Dataset schema (column, type, observed string-length range):

Query Text     string (length 10 to 36.2k)
Ranking 1      string (length 29 to 3.76k)
Ranking 2      string (length 35 to 36.2k)
Ranking 3      string (length 20 to 36.2k)
Ranking 4      string (length 20 to 3.13k)
Ranking 5      string (length 20 to 36.2k)
Ranking 6      string (length 20 to 36.2k)
Ranking 7      string (length 12 to 36.2k)
Ranking 8      string (length 13 to 36.2k)
Ranking 9      string (length 12 to 36.2k)
Ranking 10     string (length 12 to 36.2k)
Ranking 11     string (length 19 to 3.56k)
Ranking 12     string (length 12 to 4.86k)
Ranking 13     string (length 19 to 4.57k)
score_0 through score_13    float64

Each row consists of a query abstract (Query Text), the abstracts of 13 ranked candidate papers (Ranking 1 through Ranking 13), and 14 relevance scores (score_0 through score_13) listed in non-increasing order.
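To make the row layout concrete, here is a minimal Python sketch of one record under the schema above. The RankingRow container, its field names, and the validate checks are assumptions made for illustration; they are not part of the published dataset format.

```python
from dataclasses import dataclass

@dataclass
class RankingRow:
    query_text: str      # "Query Text" column
    rankings: list[str]  # "Ranking 1" .. "Ranking 13", best match first
    scores: list[float]  # "score_0" .. "score_13", non-increasing

    def validate(self) -> None:
        assert len(self.rankings) == 13, "expected 13 ranked abstracts"
        assert len(self.scores) == 14, "expected 14 scores"
        assert all(a >= b for a, b in zip(self.scores, self.scores[1:])), \
            "scores should be sorted in non-increasing order"

# Toy instance shaped like the first row below (abstracts truncated).
row = RankingRow(
    query_text="New directions in cryptography ...",
    rankings=["Review and Perspectives on Driver Digital Twin ..."] * 13,
    scores=[1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351,
            0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042],
)
row.validate()
```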
Query: New directions in cryptography Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring, and knowledge refining is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: a car-following model was proposed based on deep reinforcement learning; it uses speed deviations as the reward function and considers a reaction delay of 1 s; the deep deterministic policy gradient algorithm was used to optimize the model; the model outperformed traditional and recent data-driven car-following models; and the model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during the multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 through score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
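The query paper in this first row is Diffie and Hellman's "New directions in cryptography", which introduced public-key key distribution. As a concrete illustration of that idea, here is a minimal sketch of the exponential key exchange it proposed; the tiny modulus and generator are toy values for illustration, not secure parameters.

```python
import secrets

# Toy Diffie-Hellman exchange: both parties derive the same secret over a
# public channel. Real deployments use a large safe prime (e.g., the
# RFC 3526 groups); 23 and 5 here are illustrative only.
p, g = 23, 5                       # public modulus and generator (toy values)

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # Alice publishes g^a mod p
B = pow(g, b, p)                   # Bob publishes g^b mod p

shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both sides now hold g^(ab) mod p
```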
Query: A Comparative Analysis of Selection Schemes Used in Genetic Algorithms This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or "steady state") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring, and knowledge refining is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Memetic Algorithms for Continuous Optimisation Based on Local Search Chains Memetic algorithms with continuous local search methods have arisen as effective tools to address the difficulty of obtaining reliable solutions of high precision for complex continuous optimisation problems. There exists a group of continuous local search algorithms that stand out as exceptional local search optimisers. However, on some occasions, they may become very expensive, because of the way they exploit local information to guide the search process. In this paper, they are called intensive continuous local search methods. Given the potential of this type of local optimisation methods, it is interesting to build prospective memetic algorithm models with them. This paper presents the concept of local search chain as a springboard to design memetic algorithm approaches that can effectively use intense continuous local search methods as local search operators. Local search chain concerns the idea that, at one stage, the local search operator may continue the operation of a previous invocation, starting from the final configuration (initial solution, strategy parameter values, internal variables, etc.) reached by this one. The proposed memetic algorithm favours the formation of local search chains during the memetic algorithm run with the aim of concentrating local tuning in search regions showing promise. In order to study the performance of the new memetic algorithm model, an instance is implemented with CMA-ES as an intense local search method. The benefits of the proposal in comparison to other kinds of memetic algorithms and evolutionary algorithms proposed in the literature to deal with continuous optimisation problems are experimentally shown. Concretely, the empirical study reveals a clear superiority when tackling high-dimensional problems.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during the multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 through score_13): 1, 0.00186, 0.001143, 0.000761, 0.000586, 0.000487, 0.000358, 0.000275, 0.000162, 0.000081, 0.000059, 0.00005, 0.000044, 0.000043
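The query in this second row compares selection schemes such as proportionate reproduction, ranking selection, and tournament selection. Below is a minimal sketch of binary tournament selection, the simplest of these; the toy population and OneMax-style fitness are illustrative assumptions, not part of the paper.

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Pick k individuals uniformly at random and return the fittest."""
    contestants = rng.sample(range(len(population)), k)
    best = max(contestants, key=lambda i: fitness[i])
    return population[best]

# Toy example: bitstring individuals scored by their number of ones (OneMax).
population = ["0101", "1100", "1111", "0011"]
fitness = [s.count("1") for s in population]
parent = tournament_select(population, fitness)
```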
Query: Graph-Based Algorithms for Boolean Function Manipulation In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring, and knowledge refining is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: a car-following model was proposed based on deep reinforcement learning; it uses speed deviations as the reward function and considers a reaction delay of 1 s; the deep deterministic policy gradient algorithm was used to optimize the model; the model outperformed traditional and recent data-driven car-following models; and the model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
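The split-and-accumulate (SAC) pattern can be illustrated in a few lines: a multiply is decomposed into shift-adds over only the essential (set) bits of the weight, so zero bits contribute no work. A minimal integer-only sketch of the idea, not the Tetris hardware design:

```python
def split_and_accumulate(x, w):
    """Accumulate the input shifted by the positions of the essential
    (non-zero) bits of the weight, instead of a full multiply.
    Handles non-negative integer weights only; quantization details omitted."""
    acc = 0
    bitpos = 0
    while w:
        if w & 1:                 # essential bit -> one shift-add
            acc += x << bitpos
        w >>= 1
        bitpos += 1
    return acc

assert split_and_accumulate(3, 10) == 30  # 10 = 0b1010: two shift-adds, not four
```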
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
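A scalar Kalman filter conveys the estimation step in miniature. The sketch below assumes random-walk trust dynamics and a single fused measurement; the paper's actual model couples several sensed behaviors, so treat this as illustrative only.

```python
def kalman_trust_update(t_est, p_est, z, q=1e-3, r=1e-1):
    """One scalar Kalman step for a trust estimate.
    t_est, p_est: prior trust estimate and its variance; z: a fused
    measurement derived from eye tracking, usage time, and NDRT performance
    (assumed); q, r: process and measurement noise variances (assumed)."""
    # Predict: random-walk trust dynamics (assumption).
    p_pred = p_est + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)
    t_new = t_est + k * (z - t_est)
    p_new = (1 - k) * p_pred
    return t_new, p_new
```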
Scores (score_0 through score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Dynamic program slicing Program slices are useful in debugging, testing, maintenance, and understanding of programs. The conventional notion of a program slice, the static slice, is the set of all statements that might affect the value of a given variable occurrence. In this paper, we investigate the concept of the dynamic slice consisting of all statements that actually affect the value of a variable occurrence for a given program input. The sensitivity of dynamic slicing to particular program inputs makes it more useful in program debugging and testing than static slicing. Several approaches for computing dynamic slices are examined. The notion of a Dynamic Dependence Graph and its use in computing dynamic slices is discussed. The Dynamic Dependence Graph may be unbounded in length; therefore, we introduce the economical concept of a Reduced Dynamic Dependence Graph, which is proportional in size to the number of dynamic slices arising during the program execution.
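The core of dynamic slicing, tracing actual data dependences through one execution, fits in a short function. A sketch over a recorded trace; control dependences and the Reduced Dynamic Dependence Graph are omitted, and the trace format is an assumption for illustration:

```python
def dynamic_slice(trace, criterion_index):
    """trace: list of (stmt_id, defs, uses) tuples in execution order;
    criterion_index: trace position of the variable occurrence of interest.
    Returns the statements that actually affected that occurrence."""
    last_def = {}   # variable -> trace index of its most recent definition
    dep = {}        # trace index -> trace indices it dynamically depends on
    for i, (_, defs, uses) in enumerate(trace):
        dep[i] = {last_def[v] for v in uses if v in last_def}
        for v in defs:
            last_def[v] = i
    # Walk data dependences backward from the slicing criterion.
    slice_ids, work = set(), [criterion_index]
    while work:
        i = work.pop()
        if i not in slice_ids:
            slice_ids.add(i)
            work.extend(dep[i])
    return {trace[i][0] for i in slice_ids}
```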
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related studies are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
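The conjunctive-triggers, disjunctive-observations insight translates directly into a detection predicate: all triggers must be applied, but seeing any one observation suffices. A sketch assuming a hypothetical erratum record with 'triggers' and 'observations' sets (the real RemembERR schema also carries contexts and abstraction levels):

```python
def bug_detected(applied_triggers, observed, erratum):
    """Triggers are conjunctive; observations are disjunctive.
    erratum: {'triggers': set of trigger labels,
              'observations': set of observation labels} (assumed layout)."""
    all_triggers_applied = erratum['triggers'] <= set(applied_triggers)
    any_observation_seen = bool(erratum['observations'] & set(observed))
    return all_triggers_applied and any_observation_seen
```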
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
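The clustering step can be sketched with plain fuzzy C-means; the paper's WKFCM additionally weights samples and operates in a kernel space, which this minimal version omits:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """One fuzzy C-means membership update.
    X: (n, d) samples; centers: (c, d) cluster centers; m: fuzzifier > 1.
    Returns an (n, c) membership matrix whose rows sum to 1."""
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = dist ** (-2.0 / (m - 1.0))       # standard FCM weighting
    return inv / inv.sum(axis=1, keepdims=True)
```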
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
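The SVM admission test itself is a binary classifier over task features. A sketch using scikit-learn, with the feature set (e.g., task size, deadline slack, queue length) assumed rather than taken from the paper:

```python
from sklearn import svm

def train_admission_model(task_features, completed_within_deadline):
    """Fit an SVM that predicts whether a task will finish before its
    deadline. task_features: (n, d) array; labels: 0/1 per task."""
    clf = svm.SVC(kernel="rbf", probability=True)
    clf.fit(task_features, completed_within_deadline)
    return clf

def admit(clf, task_feature_vector, threshold=0.5):
    # Add to the queue only if predicted completion probability is high enough.
    p = clf.predict_proba([task_feature_vector])[0][1]
    return p >= threshold
```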
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 through score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
The nature of statistical learning theory.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related studies are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 through score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
A model of saliency-based visual attention for rapid scene analysis A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.
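Center-surround differences on an intensity channel give the flavor of the saliency map construction. A toy sketch that omits the color and orientation channels and the winner-take-all selection network of the full model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(img, center_sigmas=(1, 2), surround_scale=4):
    """Toy center-surround saliency on a single intensity channel.
    Each scale's feature map is |center - surround| of Gaussian blurs;
    maps are summed and normalized into one topographic saliency map.
    Sigma values and the surround scale are illustrative assumptions."""
    img = img.astype(float)
    sal = np.zeros_like(img)
    for s in center_sigmas:
        center = gaussian_filter(img, s)
        surround = gaussian_filter(img, s * surround_scale)
        sal += np.abs(center - surround)   # feature map at this scale
    return sal / (sal.max() + 1e-12)
```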
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related studies are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it processes cell tower sequences as RNN inference, yielding fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
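The encoder-decoder framing above maps a variable-length tower-ID sequence to a road-segment sequence. Below is a toy PyTorch sketch of that shape of model; the architecture, sizes, and vocabularies are assumptions for illustration, not DMM's actual design.

```python
# Toy encoder-decoder RNN: cell-tower IDs in, road-segment logits out.
import torch
import torch.nn as nn

class Seq2SeqMapMatcher(nn.Module):
    def __init__(self, n_towers, n_roads, emb=32, hid=64):
        super().__init__()
        self.enc_emb = nn.Embedding(n_towers, emb)
        self.dec_emb = nn.Embedding(n_roads, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, n_roads)

    def forward(self, towers, roads_in):
        _, h = self.encoder(self.enc_emb(towers))    # summary of tower sequence
        dec_out, _ = self.decoder(self.dec_emb(roads_in), h)
        return self.out(dec_out)                     # logits per road segment

model = Seq2SeqMapMatcher(n_towers=1000, n_roads=500)
towers = torch.randint(0, 1000, (2, 12))    # batch of 2 tower sequences
roads_in = torch.randint(0, 500, (2, 8))    # teacher-forced decoder input
print(model(towers, roads_in).shape)        # torch.Size([2, 8, 500])
```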
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
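The Kalman-filter estimation step described above reduces, in its simplest scalar form, to a predict-update pair. A minimal 1-D sketch in that spirit, with a random-walk trust state and a single noisy behavioral measurement; the noise variances and measurement values are assumptions:

```python
# 1-D Kalman filter sketch: scalar trust state x with variance P,
# updated from a noisy behavioral measurement z (e.g., a gaze-derived score).
def kalman_step(x, P, z, q=0.01, r=0.1):
    P = P + q                # predict: trust drifts as a random walk
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)      # update toward the measurement
    P = (1 - K) * P          # corrected variance
    return x, P

x, P = 0.5, 1.0
for z in [0.6, 0.7, 0.65, 0.8]:   # synthetic measurements
    x, P = kalman_step(x, P, z)
print(round(x, 3))
```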
Scores (score_0–score_13): 1, 0.001885, 0.001159, 0.000772, 0.000593, 0.000493, 0.000363, 0.000278, 0.000164, 0.000082, 0.00006, 0.00005, 0.000045, 0.000044
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
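The software realization described above, interpretive code with no central decode loop, can be loosely illustrated in Python: the "code" is a flat array of routine references (plus inline literals) and an instruction pointer that each routine advances. This is an analogy for exposition, not a faithful machine-level model.

```python
# Loose Python analogue of threaded code: execution jumps routine-to-routine
# through an array of callables, with no opcode decoding step.
def lit(vm):   # push the literal stored inline after this word
    vm['stack'].append(vm['code'][vm['ip']]); vm['ip'] += 1

def add(vm):
    b, a = vm['stack'].pop(), vm['stack'].pop(); vm['stack'].append(a + b)

def emit(vm):
    print(vm['stack'].pop())

def run(code):
    vm = {'code': code, 'ip': 0, 'stack': []}
    while vm['ip'] < len(code):
        word = vm['code'][vm['ip']]; vm['ip'] += 1
        word(vm)                     # enter the next routine directly

run([lit, 2, lit, 3, add, emit])    # prints 5
```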
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
Towards Developing High Performance RISC-V Processors Using Agile Methodology While research has shown that the agile chip design methodology is promising to sustain the scaling of computing performance in a more efficient way, it is still of limited usage in actual applications due to two major obstacles: 1) Lack of tool-chain and developing framework supporting agile chip design, especially for large-scale modern processors. 2) The conventional verification methods are less agile and become a major bottleneck of the entire process. To tackle both issues, we propose MINJIE, an open-source platform supporting agile processor development flow. MINJIE integrates a broad set of tools for logic design, functional verification, performance modelling, pre-silicon validation and debugging for better development efficiency of state-of-the-art processor designs. We demonstrate the usage and effectiveness of MINJIE by building two generations of an open-source superscalar out-of-order RISC-V processor code-named XIANGSHAN using agile methodologies. We quantify the performance of XIANGSHAN using SPEC CPU2006 benchmarks and demonstrate that XIANGSHAN achieves industry-competitive performance.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
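The key observation above, triggers are conjunctive while observations are disjunctive, is easy to state as a predicate: a test exposes a bug only if it applies all triggers, and seeing any single deviation suffices. A tiny sketch with a made-up erratum:

```python
# Sketch of the trigger/observation logic: all triggers AND any observation.
def test_exposes_bug(applied, observed, erratum):
    return (set(erratum['triggers']) <= set(applied)
            and bool(set(erratum['observations']) & set(observed)))

erratum = {'triggers': {'power level transition', 'custom MSR config'},
           'observations': {'hang', 'wrong result'}}
print(test_exposes_bug({'power level transition', 'custom MSR config'},
                       {'hang'}, erratum))                       # True
print(test_exposes_bug({'power level transition'}, {'hang'}, erratum))  # False
```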
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions and better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model more accurately and efficiently achieves the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
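The dispatch step described above, one model per operating condition selected by a nearest-neighbor criterion, can be sketched compactly. Plain centroids stand in for WKFCM cluster centers here, and the predictors and data are synthetic assumptions:

```python
# Sketch: route each incoming sample to the model of its nearest
# operating-condition centroid, then predict with that model.
import numpy as np

def nearest_model(x, centroids, models):
    idx = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    return models[idx]

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])   # two operating conditions
models = [lambda x: 0.1 * x.sum(),               # stand-in predictor, condition A
          lambda x: 0.9 * x.sum()]               # stand-in predictor, condition B
x = np.array([4.2, 5.1])
print(nearest_model(x, centroids, models)(x))    # condition B's model fires
```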
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
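The admission step above amounts to a binary classifier gating the queue. A minimal scikit-learn sketch under assumed features (task size, deadline, queue length) and synthetic labels; this is not the paper's feature set or training data:

```python
# Sketch: SVM predicts "finishes before deadline?"; only admitted tasks queue.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))   # [task size, deadline, queue length]
y = (X[:, 1] - X[:, 0] - 0.3 * X[:, 2] > 0).astype(int)  # synthetic label

clf = SVC(kernel='rbf').fit(X, y)

def admit(task_features, queue):
    if clf.predict([task_features])[0] == 1:
        queue.append(task_features)    # admit: predicted to meet deadline
        return True
    return False                       # drop: predicted to miss deadline

queue = []
print(admit([0.2, 0.9, 0.1], queue), admit([0.9, 0.1, 0.8], queue))
```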
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for up to 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
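The core intuition above is that only the set ("essential") bits of a weight contribute to a product, so a multiply can be decomposed into shifts and adds over those bits. A small sketch of both the zero-bit fraction and that decomposition; 8-bit unsigned weights are an assumption for brevity, and this is the arithmetic idea only, not the Tetris hardware design:

```python
# Sketch: zero-bit fraction of stored weights, and a multiply rewritten as
# shift-and-accumulate over essential (set) bits only.
def zero_bit_fraction(weights, bits=8):
    total = bits * len(weights)
    ones = sum(bin(w).count('1') for w in weights)
    return (total - ones) / total

def shift_accumulate(x, w):
    acc, bit = 0, 0
    while w:
        if w & 1:
            acc += x << bit    # only essential bit positions contribute
        w >>= 1; bit += 1
    return acc

ws = [0, 3, 64, 130]
print(zero_bit_fraction(ws))              # most stored bits are zero
print(shift_accumulate(7, 130), 7 * 130)  # identical results: 910 910
```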
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001823, 0.001121, 0.000746, 0.000575, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Signature Schemes and Anonymous Credentials from Bilinear Maps We propose a new and efficient signature scheme that is provably secure in the plain model. The security of our scheme is based on a discrete-logarithm-based assumption put forth by Lysyanskaya, Rivest, Sahai, and Wolf (LRSW), who also showed that it holds for generic groups and is independent of the decisional Diffie-Hellman assumption. We prove security of our scheme under the LRSW assumption for groups with bilinear maps. We then show how our scheme can be used to construct efficient anonymous credential systems as well as group signature and identity escrow schemes. To this end, we provide efficient protocols that allow one to prove in zero-knowledge the knowledge of a signature on a committed (or encrypted) message and to obtain a signature on a committed message.
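For orientation, a sketch of the basic LRSW-based bilinear-map signature commonly associated with this work (the Camenisch–Lysyanskaya scheme), stated from memory, so treat the details as indicative rather than a verified restatement of the paper:

```latex
% Sketch of an LRSW-based signature over a group G of prime order q with
% generator g and a bilinear map e : G x G -> G_T (details as commonly
% presented, not checked against the original paper).
\begin{align*}
\textbf{KeyGen:}\quad & x, y \leftarrow \mathbb{Z}_q,\quad X = g^x,\quad Y = g^y.\\
\textbf{Sign}(m):\quad & a \leftarrow G \text{ random},\quad b = a^{y},\quad
  c = a^{x + m x y};\quad \text{output } (a, b, c).\\
\textbf{Verify:}\quad & e(a, Y) = e(g, b)
  \quad\text{and}\quad e(X, a)\cdot e(X, b)^{m} = e(g, c).
\end{align*}
```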
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions and better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model more accurately and efficiently achieves the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
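The over-the-air computation step described above can be illustrated numerically: devices pre-scale their updates by the inverse channel gain so the server receives their sum in one superimposed transmission, with the common power scale pinned to the weakest channel (the straggler effect the relay scheme targets). A toy numpy sketch with synthetic, assumed channel values:

```python
# Sketch: channel-inverting precoding for over-the-air model aggregation.
import numpy as np

rng = np.random.default_rng(1)
updates = rng.normal(size=(4, 3))        # 4 devices, 3-parameter model
h = np.array([1.0, 0.8, 0.5, 0.1])       # channel gains; device 3 is weak
eta = h.min()                            # power scale set by weakest channel
tx = (eta / h)[:, None] * updates        # each device inverts its channel
received = (h[:, None] * tx).sum(axis=0) # superposition over the air
print(np.allclose(received / eta, updates.sum(axis=0)))  # True: exact sum
```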
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it processes cell tower sequences as RNN inference, yielding fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.002015, 0.001238, 0.000825, 0.000634, 0.000527, 0.000388, 0.000297, 0.000175, 0.000088, 0.000064, 0.000054, 0.000048, 0.000047
NP-complete scheduling problems We show that the problem of finding an optimal schedule for a set of jobs is NP-complete even in the following two restricted cases: (1) all jobs require one time unit; (2) all jobs require one or two time units, and there are only two processors (resolving in the negative a conjecture of R. L. Graham, Proc. SJCC, 1972, pp. 205-218). As a consequence, the general preemptive scheduling problem is also NP-complete. These results are tantamount to showing that the scheduling problems mentioned are intractable.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions and better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model more accurately and efficiently achieves the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
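The zero-bit observation above is easy to reproduce in spirit: view the weights in fixed point and count zero bits, including those inside non-zero values. A toy sketch follows; the 8-bit linear magnitude quantization is an assumption for illustration, not Tetris's actual scheme.

```python
# Count zero bits, including those inside non-zero values, after a simple
# fixed-point view of the weights (8-bit linear magnitude quantization is
# an illustrative assumption).
def zero_bit_fraction(weights, bits=8):
    scale = max(abs(w) for w in weights) or 1.0
    zeros = total = 0
    for w in weights:
        q = round(abs(w) / scale * (2**bits - 1))   # quantized magnitude
        zeros += format(q, f"0{bits}b").count("0")  # zero bits in this weight
        total += bits
    return zeros / total

print(zero_bit_fraction([0.0, 0.5, -0.25, 0.01, 1.0]))  # 0.7
```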
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
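The Kalman filter-based estimator described above can be illustrated with a scalar filter tracking a latent trust level from a noisy behavioral measurement. The dynamics and noise parameters below are illustrative stand-ins, not the model identified from the user study.

```python
# Scalar Kalman filter over a latent trust level, updated from a noisy
# behavioral measurement (e.g., a score derived from eye tracking and task
# performance). All parameters are illustrative assumptions.
A, C = 0.98, 1.0        # state-transition and observation gains
Q, R = 1e-4, 1e-2       # process and measurement noise variances
x_hat, P = 0.5, 1.0     # initial trust estimate and its variance

def kalman_step(x_hat, P, z):
    x_pred = A * x_hat                        # predict
    P_pred = A * P * A + Q
    K = P_pred * C / (C * P_pred * C + R)     # Kalman gain
    x_new = x_pred + K * (z - C * x_pred)     # correct with measurement z
    P_new = (1 - K * C) * P_pred
    return x_new, P_new

for z in [0.6, 0.65, 0.7, 0.4]:
    x_hat, P = kalman_step(x_hat, P, z)
    print(round(x_hat, 3))
```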
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
The Complexity of Flowshop and Jobshop Scheduling NP-complete problems form an extensive equivalence class of combinatorial problems for which no nonenumerative algorithms are known. Our first result shows that determining a shortest-length schedule in an m-machine flowshop is NP-complete for m ≥ 3. For m = 2, there is an efficient algorithm for finding such schedules. The second result shows that determining a minimum mean-flow-time schedule in an m-machine flowshop is NP-complete for every m ≥ 2. Finally we show that the shortest-length schedule problem for an m-machine jobshop is NP-complete for every m ≥ 2. Our results are strong in that they hold whether the problem size is measured by number of tasks, number of bits required to express the task lengths, or by the sum of the task lengths.
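The efficient m = 2 algorithm referenced above is Johnson's rule: jobs whose machine-1 time does not exceed their machine-2 time go first, in increasing machine-1 time, and the remaining jobs go last, in decreasing machine-2 time. A short sketch:

```python
# Johnson's rule for the two-machine flowshop (minimizes makespan).
# jobs is a list of (a, b) processing times on machines 1 and 2.
def johnson_two_machine(jobs):
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    last = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + last

print(johnson_two_machine([(3, 2), (1, 4), (5, 5), (2, 1)]))
# [(1, 4), (5, 5), (3, 2), (2, 1)]
```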
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that settles the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
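The key observation above, that triggers are conjunctive while observations are disjunctive, reduces to a simple detection predicate. A sketch over an assumed dict schema (not RemembERR's actual data format):

```python
# "Triggers are conjunctive, observations are disjunctive": a bug is detected
# iff ALL of its triggers were applied and AT LEAST ONE of its observations
# was seen. The dict schema is an illustrative assumption.
def bug_detected(applied_triggers, seen_observations, erratum):
    return (erratum["triggers"] <= set(applied_triggers)
            and bool(erratum["observations"] & set(seen_observations)))

erratum = {"triggers": {"power_transition", "custom_feature_on"},
           "observations": {"hang", "wrong_result"}}
print(bug_detected({"power_transition", "custom_feature_on", "msr_write"},
                   {"wrong_result"}, erratum))   # True: all triggers, one obs
```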
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for identifying multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess whether a task can be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
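The SVM admission step above can be sketched as a binary classifier over task features. The feature set, synthetic labels, and decision rule below are illustrative assumptions, not the paper's design.

```python
import numpy as np
from sklearn.svm import SVC

# Toy SVM admission control: predict whether a task will finish within its
# deadline from (input size, required CPU cycles, deadline, queue length).
# Features and labels are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 4))
# Synthetic ground truth: completable if demand is small relative to deadline.
y = (X[:, 0] + X[:, 1] + 0.5 * X[:, 3] < 1.5 * X[:, 2] + 0.8).astype(int)

clf = SVC(kernel="rbf").fit(X, y)
task = np.array([[0.3, 0.4, 0.6, 0.2]])          # one incoming task
print("admit" if clf.predict(task)[0] == 1 else "drop")
```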
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity at the same time.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
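The residual reformulation above amounts to learning a function F(x) and emitting F(x) + x through an identity shortcut. A minimal basic-block sketch in PyTorch, omitting the projection shortcuts ResNet uses when dimensions change:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Minimal basic residual block: output = relu(F(x) + x), where F is
    two 3x3 convolutions. A sketch of the idea only; ResNet variants add
    downsampling and projection shortcuts that this omits."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)    # identity shortcut: learn the residual

x = torch.randn(1, 16, 8, 8)
print(ResidualBlock(16)(x).shape)  # torch.Size([1, 16, 8, 8])
```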
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
A note on two problems in connexion with graphs We consider n points (nodes), some or all pairs of which are connected by a branch; the length of each branch is given. We restrict ourselves to the case where at least one path exists between any two nodes. We now consider two problems. Problem 1. Construct the tree of minimum total length between the n nodes. (A tree is a graph with one and only one path between every two nodes.) In the course of the construction that we present here, the branches are subdivided into three sets: I. the branches definitely assigned to the tree under construction (they will form a subtree); II. the branches from which the next branch to be added to set I will be selected; III. the remaining branches (rejected or not yet considered). The nodes are subdivided into two sets: A. the nodes connected by the branches of set I; B. the remaining nodes (one and only one branch of set II will lead to each of these nodes). We start the construction by choosing an arbitrary node as the only member of set A, and by placing all branches that end in this node in set II. To start with, set I is empty. From then onwards we perform the following two steps repeatedly. Step 1. The shortest branch of set II is removed from this set and added to set I.
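The Problem 1 construction described above is the minimum-spanning-tree procedure now commonly known as Prim's algorithm. A compact sketch, with a heap standing in for set II:

```python
import heapq

def prim_mst(adj, start=0):
    """Grow a subtree from an arbitrary node, repeatedly moving the shortest
    branch leading out of the tree (set II) into the tree (set I); branches
    into already-connected nodes are rejected (set III).
    adj: {node: [(weight, neighbor), ...]} for an undirected graph."""
    in_tree = {start}
    frontier = [(w, start, v) for (w, v) in adj[start]]  # set II
    heapq.heapify(frontier)
    tree = []                                            # set I
    while frontier:
        w, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue                                     # reject: set III
        in_tree.add(v)
        tree.append((u, v, w))
        for w2, nbr in adj[v]:
            heapq.heappush(frontier, (w2, v, nbr))
    return tree

graph = {0: [(1, 1), (4, 2)], 1: [(1, 0), (2, 2)], 2: [(4, 0), (2, 1)]}
print(prim_mst(graph))  # [(0, 1, 1), (1, 2, 2)]
```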
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that settles the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for identifying multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess whether a task can be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity at the same time.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
An analysis of stochastic shortest path problems We consider a stochastic version of the classical shortest path problem whereby for each node of a graph, we must choose a probability distribution over the set of successor nodes so as to reach a certain destination node with minimum expected cost. The costs of transition between successive nodes can be positive as well as negative. We prove natural generalizations of the standard results for the deterministic shortest path problem, and we extend the corresponding theory for undiscounted finite state Markovian decision problems by removing the usual restriction that costs are either all nonnegative or all nonpositive.
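The Bellman characterization underlying the generalizations above can be illustrated with plain value iteration: J(i) is the minimum over actions of the expected cost plus cost-to-go of the successor. The tiny instance below is illustrative.

```python
# Value iteration for a toy stochastic shortest path problem: at each node
# choose the action (a distribution over successors) minimizing expected
# cost-to-go; destination "t" is absorbing with cost 0. Costs may be
# negative in general; the instance and schema are illustrative.
actions = {
    "s": [[("a", 0.8, 2.0), ("s", 0.2, 1.0)],   # action 1: (next, prob, cost)
          [("t", 0.5, 6.0), ("s", 0.5, 1.0)]],  # action 2
    "a": [[("t", 1.0, 1.0)]],
}
J = {"s": 0.0, "a": 0.0, "t": 0.0}
for _ in range(100):                             # Bellman updates to convergence
    for node, acts in actions.items():
        J[node] = min(sum(p * (c + J[nxt]) for nxt, p, c in act) for act in acts)
print({k: round(v, 3) for k, v in J.items()})    # J["s"] -> 3.25
```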
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that settles the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for identifying multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess whether a task can be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity at the same time.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while operating multiply-and-accumulate (MAC) units in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern replacing conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
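A back-of-the-envelope sketch of the observation driving weight kneading, using an assumed 8-bit quantization and a naive software analogue of split-and-accumulate; the actual hardware datapath is of course different:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=1000)                       # toy "trained" weights
q = np.clip(np.round(weights / 0.01), -128, 127).astype(np.int8)  # assumed 8-bit quantization

bits = np.unpackbits(q.view(np.uint8))
print("zero-bit fraction:", 1.0 - bits.mean())                    # typically well above 0.5

def sac_dot(qweights, activations):
    # Software analogue of split-and-accumulate: only the set ("essential")
    # bits of each weight contribute, via one shifted add per set bit.
    acc = 0
    for w, a in zip(qweights.tolist(), activations.tolist()):
        sign, w = (-1, -w) if w < 0 else (1, w)
        b = 0
        while w:
            if w & 1:
                acc += sign * (a << b)
            w >>= 1
            b += 1
    return acc

acts = rng.integers(0, 16, size=q.size)
assert sac_dot(q, acts) == int(q.astype(np.int64) @ acts)  # matches a plain dot product
```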
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach is to model trust dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adapting their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
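A minimal scalar Kalman-filter sketch in the spirit of this framework; the dynamics, noise levels, and event input below are assumptions for illustration, not the paper's identified parameters:

```python
import numpy as np

def kalman_trust(observations, events, a=0.98, b=0.05, q=0.01, r=0.25):
    # x: trust estimate; p: estimate variance (all constants assumed).
    x, p = 0.5, 1.0
    estimates = []
    for z, u in zip(observations, events):
        # Predict: trust relaxes (a) and responds to interaction events (u).
        x, p = a * x + b * u, a * a * p + q
        # Update with a noisy behavioral observation z (modeled as z = x + noise).
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(1)
true_trust = np.clip(0.4 + np.cumsum(rng.normal(0.005, 0.02, 100)), 0, 1)
noisy_behavior = true_trust + rng.normal(0, 0.3, 100)
print(kalman_trust(noisy_behavior, np.ones(100))[-3:])
```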
Scores (score_0 to score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Learning to Predict by the Methods of Temporal Differences This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
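The tabular TD(0) rule is the simplest instance of the class of methods this abstract introduces: each estimate moves toward the temporally successive prediction rather than the final outcome. A self-contained sketch on a toy random walk:

```python
import random
from collections import defaultdict

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=1.0):
    """Shift V[state] toward the bootstrapped target reward + gamma * V[next_state]."""
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return V

# Usage on a 5-state random walk: states 0 and 4 terminal, reward 1 on reaching 4.
V = defaultdict(float)
for _ in range(1000):
    s = 2
    while s not in (0, 4):
        s_next = s + random.choice((-1, 1))
        r = 1.0 if s_next == 4 else 0.0
        V = td0_update(V, s, r, s_next)
        s = s_next
print([round(V[s], 2) for s in range(5)])  # interior values approach 0.25, 0.5, 0.75
```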
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
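The SCD-DR module's internals are not reproduced here; a common generic way to "recombine style and content" across domains is AdaIN-style feature renormalization, shown below purely as a stand-in, not as the authors' method:

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    # Re-normalize per-channel content statistics to match the style's,
    # keeping the content layout but imposing the style domain's statistics.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

content, style = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(adain(content, style).shape)  # torch.Size([1, 64, 32, 32])
```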
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
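The abstract's key formal observation (triggers are conjunctive, observations are disjunctive) fits in a few lines of code; the field names here are illustrative, not RemembERR's schema:

```python
def bug_detected(applied, erratum_triggers, observed, erratum_observations):
    # Conjunctive triggers: every trigger must have been applied.
    # Disjunctive observations: a single matching deviation suffices.
    return erratum_triggers <= applied and bool(erratum_observations & observed)

print(bug_detected({"smm_entry", "msr_cfg"}, {"smm_entry", "msr_cfg"},
                   {"hang"}, {"hang", "wrong_result"}))  # True
```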
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process, and a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
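At the core of WKFCM is the fuzzy C-means alternation between membership and centroid updates; the sketch below is plain Euclidean FCM, omitting the paper's kernelization and sample weighting for brevity:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=50, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = W.T @ X / W.sum(axis=0)[:, None]        # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # standard FCM membership update
    return U, centers

# Toy usage: three 2-D blobs, recovered as three fuzzy operating conditions.
X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, size=(50, 2))
               for mu in (0.0, 3.0, 6.0)])
U, centers = fuzzy_c_means(X)
print(np.round(centers, 2))
```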
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: a significant increase in the number of completed tasks, together with energy savings and cost reductions, compared with alternative baseline approaches.
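An illustrative sketch of the two ingredients named above, with all features, labels, and costs invented for the example: an SVM predicting deadline feasibility for admission, and a drift-plus-penalty rule trading queue backlog against cost:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical admission model: features = (task size, deadline, queue length);
# label 1 means "likely completable before its deadline".
admit_clf = SVC().fit(
    np.array([[1, 5, 0], [8, 1, 9], [2, 4, 1], [9, 2, 8]], dtype=float),
    np.array([1, 0, 1, 0]),
)

def admit(task_size, deadline, queue_len):
    return bool(admit_clf.predict([[task_size, deadline, queue_len]])[0])

def drift_plus_penalty_choice(Q, V, options):
    # options: (name, bits served this slot, cost). Minimizing V*cost - Q*service
    # is the standard Lyapunov drift-plus-penalty trade-off: large backlog Q
    # favors service, large V favors saving energy/money.
    return min(options, key=lambda o: V * o[2] - Q * o[1])

if admit(task_size=3, deadline=4, queue_len=2):
    Q = 5000.0  # queued bits
    print(drift_plus_penalty_choice(Q, V=10.0, options=[
        ("local", 1000, 2.0), ("edge", 4000, 5.0),
    ]))
```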
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences simply by running RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
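A skeleton of the encoder-decoder idea (cell-tower sequence in, road-segment sequence out); the vocabulary sizes and dimensions are made up, and the spatial-aware representation and RL refinement stages are omitted:

```python
import torch
import torch.nn as nn

class Seq2SeqMatcher(nn.Module):
    def __init__(self, n_towers, n_roads, dim=64):
        super().__init__()
        self.tower_emb = nn.Embedding(n_towers, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.road_emb = nn.Embedding(n_roads, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_roads)

    def forward(self, towers, roads_in):
        _, h = self.encoder(self.tower_emb(towers))       # summarize cell-tower sequence
        dec_out, _ = self.decoder(self.road_emb(roads_in), h)
        return self.out(dec_out)                          # per-step logits over road segments

model = Seq2SeqMatcher(n_towers=1000, n_roads=5000)
logits = model(torch.randint(0, 1000, (2, 12)), torch.randint(0, 5000, (2, 20)))
print(logits.shape)  # (2, 20, 5000): variable-length output decoded step by step
```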
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach is to model trust dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adapting their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.001869, 0.001149, 0.000765, 0.000588, 0.000489, 0.00036, 0.000276, 0.000162, 0.000082, 0.000059, 0.00005, 0.000044, 0.000043
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (i) highly reliable communication whenever and wherever needed; (ii) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process, and a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: a significant increase in the number of completed tasks, together with energy savings and cost reductions, compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences simply by running RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach is to model trust dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adapting their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.001893, 0.001163, 0.000775, 0.000596, 0.000495, 0.000364, 0.000279, 0.000164, 0.000083, 0.00006, 0.00005, 0.000045, 0.000044
Authenticated Multi-Party Key Agreement We examine key agreement protocols providing (i) key authentication, (ii) key confirmation, and (iii) forward secrecy. Attacks are presented against previous two-party key agreement schemes, and we subsequently present a protocol providing the properties listed above. A generalization of the Burmester-Desmedt (BD) model (Eurocrypt '94) for multi-party key agreement is given, allowing a transformation of any two-party key agreement protocol into a multi-party protocol. A multi-party scheme (based...
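For reference, the unauthenticated Burmester-Desmedt protocol that this work generalizes can be sketched in a few lines; the parameters below are tiny and insecure, and the authentication and key-confirmation layers the paper adds are deliberately omitted:

```python
import secrets

p, g, n = 2**61 - 1, 3, 4          # toy group parameters; NOT secure in practice

x = [secrets.randbelow(p - 2) + 1 for _ in range(n)]   # each party's secret exponent
z = [pow(g, xi, p) for xi in x]                        # round 1: broadcast g^x_i
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, x[i], p)
     for i in range(n)]                                # round 2: (z_{i+1}/z_{i-1})^{x_i}

def session_key(i):
    # K_i = z_{i-1}^{n*x_i} * X_i^{n-1} * X_{i+1}^{n-2} * ... * X_{i+n-2}^1 (mod p)
    k = pow(z[(i - 1) % n], n * x[i], p)
    for j in range(1, n):
        k = k * pow(X[(i + j - 1) % n], n - j, p) % p
    return k

# Every party derives the same key, g^{x1*x2 + x2*x3 + ... + xn*x1}.
assert len({session_key(i) for i in range(n)}) == 1
```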
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process, and a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: a significant increase in the number of completed tasks, together with energy savings and cost reductions, compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
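The zero-bit argument is easy to demonstrate in software: a multiply-accumulate can be rewritten as shifts and adds over only the set bits of each weight, so zero values and zero bits alike contribute no work. A toy analogue of split-and-accumulate (the real Tetris design does this in hardware):

```python
# Toy illustration of bit-level "ineffectual computation": a MAC over 8-bit weights
# computed as shifts/adds over only the *set* bits of each weight. Zero-value
# weights and zero bits within non-zero weights are both skipped.
def mac_essential_bits(weights, activations, bits=8):
    acc = 0
    for w, x in zip(weights, activations):
        sign = -1 if w < 0 else 1
        w = abs(w)
        for b in range(bits):
            if (w >> b) & 1:          # zero bits contribute nothing and are skipped
                acc += sign * (x << b)
    return acc

ws = [0, 3, -5, 64]                    # zero values AND zero bits are both skipped
xs = [7, 2, 1, 1]
assert mac_essential_bits(ws, xs) == sum(w * x for w, x in zip(ws, xs))
print(mac_essential_bits(ws, xs))      # 65
```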
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
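As a rough illustration of the Kalman-filter-based estimator, here is a scalar filter over a latent trust level; the dynamics, measurement model, and noise values are illustrative placeholders, not the parameters identified in the study:

```python
# Minimal scalar Kalman filter for a latent trust level, in the spirit of the
# paper's estimator; all model constants below are assumed, not identified values.
def kalman_step(x, P, z, a=0.95, q=0.01, h=1.0, r=0.25):
    # Predict: trust evolves linearly with process noise q.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update with a scalar "sensed behavior" z (e.g., a gaze-derived score).
    K = P_pred * h / (h * P_pred * h + r)
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

x, P = 0.5, 1.0                        # initial trust estimate and variance
for z in [0.6, 0.7, 0.65, 0.8, 0.75]:  # synthetic behavioral measurements
    x, P = kalman_step(x, P, z)
    print(round(x, 3), round(P, 4))
```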
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
The Random Oracle Methodology, Revisited. We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model and the security of the schemes that result from implementing the random oracle by so-called "cryptographic hash functions". The main result of this article is a negative one: there exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
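The conjunctive-triggers/disjunctive-observations formalization lends itself to a tiny data model. A sketch with hypothetical trigger and observation names (not entries from the actual database):

```python
# Sketch of the key formalization: triggers are conjunctive, observations disjunctive.
# Field values in the example erratum are hypothetical, not taken from RemembERR.
from dataclasses import dataclass

@dataclass
class Erratum:
    triggers: frozenset      # ALL must be applied to expose the bug
    observations: frozenset  # ANY single one suffices to detect it

    def exposed_by(self, applied_triggers: set) -> bool:
        return self.triggers <= applied_triggers

    def detected_by(self, seen_observations: set) -> bool:
        return bool(self.observations & seen_observations)

e = Erratum(triggers=frozenset({"power_state_transition", "custom_msr_config"}),
            observations=frozenset({"machine_check", "wrong_result"}))

print(e.exposed_by({"power_state_transition"}))                       # False: need all
print(e.exposed_by({"power_state_transition", "custom_msr_config"}))  # True
print(e.detected_by({"wrong_result"}))                                # True: any one
```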
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
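The final dispatch step, picking which per-condition BLM to invoke via the nearest-neighbor criterion, can be sketched as follows; the cluster centers and per-condition predictors are toy stand-ins:

```python
# Sketch of the dispatch step: route a sample to the operating-condition model
# whose cluster center is nearest. Centers and models below are toy stand-ins.
import numpy as np

centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])   # from (WK)FCM clustering
models = [lambda x: 0.1 * x.sum(),        # one predictor per operating condition
          lambda x: 0.5 * x.sum() + 1.0,
          lambda x: -0.2 * x.sum() + 3.0]

def predict(x):
    x = np.asarray(x, dtype=float)
    idx = int(np.argmin(np.linalg.norm(centers - x, axis=1)))  # nearest center
    return idx, models[idx](x)

print(predict([4.8, 5.3]))   # routed to condition 1
print(predict([9.0, 0.5]))   # routed to condition 2
```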
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as its reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient (DDPG) algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability for generalization.
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes, such as changes in task features, may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for the dynamic flexible job-shop scheduling problem (DFJSP). The problem consists of n jobs that should be processed by m machines, where the processing times of the jobs deviate from their estimated values. The objective is near-optimal scheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed, taking machine set-up into account, once the exact times are determined. In order to evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in the processing times, and the statistical results illustrate the efficiency of the proposed algorithm.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed through a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
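A minimal skeleton of such an encoder-decoder map matcher, with assumed vocabulary sizes and dimensions and a greedy decoding loop standing in for the paper's full training and reinforcement-learning refinement:

```python
# Skeleton of an encoder-decoder RNN for map matching (cell tower IDs -> road
# segment IDs). All sizes and the decoding scheme are illustrative assumptions.
import torch
import torch.nn as nn

N_TOWERS, N_ROADS, EMB, HID, SOS = 1000, 5000, 64, 128, 0

class Seq2SeqMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_emb = nn.Embedding(N_TOWERS, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.dec_emb = nn.Embedding(N_ROADS, EMB)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, N_ROADS)

    def forward(self, towers, max_len=20):
        _, h = self.encoder(self.enc_emb(towers))   # summarize the tower sequence
        tok = torch.full((towers.size(0), 1), SOS, dtype=torch.long)
        segments = []
        for _ in range(max_len):                    # greedy decoding of segments
            o, h = self.decoder(self.dec_emb(tok), h)
            tok = self.out(o).argmax(-1)
            segments.append(tok)
        return torch.cat(segments, dim=1)

model = Seq2SeqMatcher()
print(model(torch.randint(0, N_TOWERS, (2, 15))).shape)  # (2, 20) road segment IDs
```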
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002118
0.001302
0.000867
0.000667
0.000554
0.000408
0.000313
0.000184
0.000092
0.000067
0.000056
0.00005
0.000049
Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach Evolutionary algorithms (EAs) are often well-suited for optimization problems involving several, often conflicting objectives. Since 1985, various evolutionary approaches to multiobjective optimization have been developed that are capable of searching for multiple solutions concurrently in a single run. However, the few comparative studies of different methods presented up to now remain mostly qualitative and are often restricted to a few approaches. In this paper, four multiobjective EAs are compared quantitatively, with an extended 0/1 knapsack problem taken as the basis. Furthermore, we introduce a new evolutionary approach to multicriteria optimization, the Strength Pareto EA (SPEA), which combines several features of previous multiobjective EAs in a unique manner. It is characterized by a) storing nondominated solutions externally in a second, continuously updated population, b) evaluating an individual's fitness depending on the number of external nondominated points that dominate it, c) preserving population diversity using the Pareto dominance relationship, and d) incorporating a clustering procedure to reduce the nondominated set without destroying its characteristics. The proof-of-principle results obtained on two artificial problems as well as a larger problem, the synthesis of a digital hardware-software multiprocessor system, suggest that SPEA can be very effective in sampling from along the entire Pareto-optimal front and distributing the generated solutions over the tradeoff surface. Moreover, SPEA clearly outperforms the other four multiobjective EAs on the 0/1 knapsack problem.
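Two of SPEA's ingredients, Pareto dominance and the externally stored nondominated set, are easy to sketch; the strength fitness assignment and clustering-based truncation are omitted here for brevity:

```python
# Sketch of two SPEA ingredients: Pareto dominance and the external archive update.
def dominates(a, b):
    """a dominates b (maximization): no worse everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(m, candidate) for m in archive):
        return archive                                   # candidate is dominated
    archive = [m for m in archive if not dominates(candidate, m)]
    return archive + [candidate]

archive = []
for point in [(1, 5), (3, 3), (2, 2), (5, 1), (4, 4)]:
    archive = update_archive(archive, point)
print(archive)   # nondominated set: [(1, 5), (5, 1), (4, 4)]
```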
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
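The core of the FL framework, aggregating locally trained models instead of raw pilot/channel data, follows the familiar federated averaging pattern. A numpy sketch with placeholder local updates standing in for the CNN training:

```python
# Minimal federated averaging over local model weights (numpy stand-ins for the CNN),
# illustrating why only parameters, not pilot/channel data, leave each user.
import numpy as np

def local_update(global_w, local_data, lr=0.1):
    """Placeholder local step: nudge weights toward this user's data mean."""
    return global_w - lr * (global_w - local_data.mean(axis=0))

def fed_avg(weights, sizes):
    """Size-weighted average of the users' locally updated models."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

rng = np.random.default_rng(1)
global_w = np.zeros(4)
datasets = [rng.normal(mu, 1.0, size=(n, 4))
            for mu, n in [(1.0, 50), (2.0, 100), (0.5, 25)]]
for rnd in range(3):                       # communication rounds
    locals_ = [local_update(global_w, d) for d in datasets]
    global_w = fed_avg(locals_, [len(d) for d in datasets])
    print(rnd, np.round(global_w, 3))
```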
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Predicting node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures can seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique that can predict the failure-proneness of a node in a cloud service system based on historical data, before a node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, thereby improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure can be caused by a variety of reasons and reflected by many temporal and spatial signals; furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
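The ranking and cost-sensitive thresholding stages (3 and 4) can be sketched independently of the LSTM and Random Forest; the combined scores and the cost values below are illustrative stand-ins:

```python
# Sketch of the ranking + cost-sensitive thresholding stage; the per-node scores
# and the miss/false-alarm costs are illustrative assumptions.
def rank_nodes(scores):
    """Rank nodes by failure-proneness (descending combined score)."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def best_threshold(ranked, labels, c_fn=10.0, c_fp=1.0):
    """Pick the cutoff minimizing cost = c_fn * missed_failures + c_fp * false_alarms."""
    best = (float("inf"), 0)
    for k in range(len(ranked) + 1):
        flagged = {n for n, _ in ranked[:k]}
        fn = sum(1 for n, y in labels.items() if y and n not in flagged)
        fp = sum(1 for n, y in labels.items() if not y and n in flagged)
        best = min(best, (c_fn * fn + c_fp * fp, k))
    return best   # (cost, number of top-ranked nodes to flag as faulty)

combined = {"n1": 0.92, "n2": 0.15, "n3": 0.78, "n4": 0.05}  # e.g., fused LSTM/RF scores
truth = {"n1": 1, "n2": 0, "n3": 1, "n4": 0}
ranked = rank_nodes(combined)
print(ranked, best_threshold(ranked, truth))   # optimal cutoff flags the top 2 nodes
```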
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001868
0.001148
0.000765
0.000588
0.000489
0.000359
0.000276
0.000162
0.000082
0.000059
0.00005
0.000044
0.000043
MOPSO: a proposal for multiple objective particle swarm optimization This paper introduces a proposal to extend the heuristic called "particle swarm optimization" (PSO) to deal with multiobjective optimization problems. Our approach uses the concept of Pareto dominance to determine the flight direction of a particle and it maintains previously found nondominated vectors in a global repository that is later used by other particles to guide their own flight. The approach is validated using several standard test functions from the specialized literature. Our results indicate that our approach is highly competitive with current evolutionary multiobjective optimization techniques.
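The repository-guided flight can be sketched as a velocity update whose social term pulls toward a leader sampled from the global repository of nondominated solutions (the repository itself is maintained with the same dominance test as in the SPEA sketch above); the coefficients are common PSO defaults, not the paper's tuned values:

```python
# Velocity update guided by a leader drawn from the global repository of
# nondominated solutions; inertia and acceleration coefficients are assumed.
import random

def step(pos, vel, pbest, repository, w=0.4, c1=2.0, c2=2.0):
    leader = random.choice(repository)    # the repository steers the flight
    new_vel = [w * v
               + c1 * random.random() * (pb - x)
               + c2 * random.random() * (ld - x)
               for v, x, pb, ld in zip(vel, pos, pbest, leader)]
    return [x + v for x, v in zip(pos, new_vel)], new_vel

random.seed(0)
repository = [[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]]   # nondominated positions so far
pos, vel, pbest = [0.7, 0.7], [0.0, 0.0], [0.6, 0.6]
for _ in range(3):
    pos, vel = step(pos, vel, pbest, repository)
    print([round(x, 3) for x in pos])
```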
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why practical URLLC remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
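The abstract names an auction game for selecting each group's head DT. As a generic point of reference, here is a sealed-bid second-price auction sketch; the paper's actual AG-CDM bid formation and payment rules are not reproduced, and the capability bids below are hypothetical:

```python
# Generic sealed-bid (second-price) auction for choosing a group head.
# Bids are hypothetical "capability scores" of each digital twin.
bids = {"dt_a": 7.2, "dt_b": 9.1, "dt_c": 8.4}

ranked = sorted(bids, key=bids.get, reverse=True)   # highest bid first
head, runner_up = ranked[0], ranked[1]
payment = bids[runner_up]          # winner pays the second-highest bid
print(f"head DT: {head}, payment: {payment}")
```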
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning. •It uses speed deviation as the reward function and considers a reaction delay of 1 s. •The deep deterministic policy gradient algorithm was used to optimize the model. •The model outperformed traditional and recent data-driven car-following models. •The model demonstrated good generalization capability.
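The highlights above describe a reward based on speed deviation with a 1 s reaction delay. A minimal sketch of such a reward under assumed simulation parameters (the step size and delay buffering are illustrative, not the paper's exact shaping):

```python
# Speed-deviation reward with a fixed reaction delay: the agent at time t
# is rewarded against the leader speed observed `DELAY_STEPS` steps earlier.
DELAY_STEPS = 10          # 1 s reaction delay at an assumed 0.1 s time step

def reward(ego_speed: float, observed_leader_speed: float) -> float:
    # Penalize deviation from the (delayed) observed leader speed.
    return -abs(ego_speed - observed_leader_speed)

history = []              # buffer of leader speeds implementing the delay

def step_reward(ego_speed: float, leader_speed_now: float) -> float:
    history.append(leader_speed_now)
    observed = history[-DELAY_STEPS] if len(history) >= DELAY_STEPS else history[0]
    return reward(ego_speed, observed)
```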
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
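A quick numeric illustration of the observation above: in quantized CNN weights most magnitude bits are zero, which is the headroom a bit-level scheme such as weight kneading exploits. The weights here are random Gaussians, not taken from a real network, and the 8-bit quantization scheme is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, 10_000)                       # synthetic weights
q = np.clip(np.round(w / np.abs(w).max() * 127), -127, 127).astype(np.int8)

mag = np.abs(q).astype(np.uint8)[:, None]               # sign handled separately
bits = np.unpackbits(mag, axis=1)                       # 8 magnitude bits each
print(f"zero-bit fraction: {1.0 - bits.mean():.1%}")    # typically well above 50%
```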
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adapting their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
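A minimal scalar Kalman filter in the spirit of the trust estimator above. The trust dynamics, noise variances, and measurement mapping are assumed for illustration; the paper identifies these from user-study data:

```python
# Scalar Kalman filter over a random-walk trust state with direct measurements.
A, H = 1.0, 1.0          # state transition and measurement mapping (assumed)
Q, R = 1e-3, 5e-2        # process / measurement noise variances (assumed)

x, P = 0.5, 1.0          # initial trust estimate and its variance
for z in [0.62, 0.58, 0.71, 0.69, 0.75]:    # fused behavioral measurements
    # Predict
    x, P = A * x, A * P * A + Q
    # Update
    K = P * H / (H * P * H + R)             # Kalman gain
    x, P = x + K * (z - H * x), (1 - K * H) * P
    print(f"trust estimate: {x:.3f}")
```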
score_0..score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
TCP-Illinois: A loss- and delay-based congestion control algorithm for high-speed networks We introduce a new congestion control algorithm, called TCP-Illinois, which has many desirable properties for implementation in (very) high-speed networks. TCP-Illinois is a sender-side protocol that modifies the AIMD algorithm of standard TCP (Reno, NewReno or SACK) by adjusting the increment/decrement amounts based on delay information. By using both loss and delay as congestion signals, TCP-Illinois achieves better throughput than standard TCP in high-speed networks. To study its fairness and stability properties, we extend recently developed stochastic matrix models of TCP to accommodate window size backoff probabilities that are proportional to arrival rates when the network is congested. Using this model, TCP-Illinois is shown to allocate network resources as fairly as standard TCP. In addition, TCP-Illinois is shown to be compatible with standard TCP when implemented in today's networks, and to provide the right incentive for transitioning to the new protocol. We finally perform ns-2 simulations to validate these properties and demonstrate its performance.
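The core idea above is delay-adaptive AIMD: the additive increase shrinks and the multiplicative decrease grows as average queueing delay rises. The curves in this sketch are simple linear stand-ins, not the paper's exact f1/f2 functions:

```python
# Delay-adaptive AIMD in the spirit of TCP-Illinois (stand-in alpha/beta curves).

def alpha(avg_qdelay, d_max, a_min=0.3, a_max=10.0):
    frac = min(avg_qdelay / d_max, 1.0)
    return a_max - frac * (a_max - a_min)   # large increase when queues are empty

def beta(avg_qdelay, d_max, b_min=0.125, b_max=0.5):
    frac = min(avg_qdelay / d_max, 1.0)
    return b_min + frac * (b_max - b_min)   # gentle backoff at low delay

cwnd, d, d_max = 10.0, 0.002, 0.1           # window, queueing delay, max delay (s)
for _ in range(5):
    cwnd += alpha(d, d_max)                 # per-RTT additive increase
print(f"cwnd after 5 RTTs: {cwnd:.1f}")
cwnd *= (1 - beta(d, d_max))                # multiplicative decrease on loss
print(f"cwnd after loss:   {cwnd:.1f}")
```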
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to settle the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize each erratum as describing, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected across all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: compared with alternative baseline approaches, it yields a significant increase in the number of completed tasks together with energy savings and cost reductions.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why practical URLLC remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning. •It uses speed deviation as the reward function and considers a reaction delay of 1 s. •The deep deterministic policy gradient algorithm was used to optimize the model. •The model outperformed traditional and recent data-driven car-following models. •The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed with a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for map matching with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adapting their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0..score_13: 1, 0.002015, 0.001238, 0.000825, 0.000634, 0.000527, 0.000388, 0.000298, 0.000175, 0.000088, 0.000064, 0.000054, 0.000048, 0.000047
CUBIC: a new TCP-friendly high-speed TCP variant CUBIC is a congestion control protocol for TCP (transmission control protocol) and the current default TCP algorithm in Linux. The protocol modifies the linear window growth function of existing TCP standards to be a cubic function in order to improve the scalability of TCP over fast and long-distance networks. It also achieves more equitable bandwidth allocations among flows with different RTTs (round-trip times) by making the window growth independent of RTT, so that those flows grow their congestion windows at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and slowly when it is close to the saturation point. This feature allows CUBIC to be very scalable when the bandwidth-delay product of the network is large, and at the same time to be highly stable and fair to standard TCP flows. The implementation of CUBIC in Linux has gone through several upgrades. This paper documents its design, implementation, performance and evolution as the default TCP algorithm of Linux.
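The cubic growth curve behind this design is well documented (see RFC 8312): W(t) = C(t - K)^3 + W_max with K = ((W_max(1 - beta))/C)^(1/3), where C = 0.4 and beta = 0.7 are the standard constants. A small sketch; the W_max value and time points are illustrative:

```python
# CUBIC window growth curve (RFC 8312 conventions).
C, BETA = 0.4, 0.7

def cubic_window(t: float, w_max: float) -> float:
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)   # time to climb back to W_max
    return C * (t - k) ** 3 + w_max

w_max = 100.0                                   # window at the last loss event
for t in [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]:        # seconds since the loss
    print(f"t={t:.0f}s  W={cubic_window(t, w_max):.1f}")
# Concave approach to W_max, then convex probing beyond it.
```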
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to settle the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize each erratum as describing, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected across all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: compared with alternative baseline approaches, it yields a significant increase in the number of completed tasks together with energy savings and cost reductions.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why practical URLLC remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Non-Strict Cache Coherence: Exploiting Data-Race Tolerance in Emerging Applications Software distributed shared memory (DSM) platforms on networks of workstations tolerate large network latencies by employing one of several weak memory consistency models. Data-race tolerant applications, such as Genetic Algorithms (GAs), Probabilistic Inference, etc., offer an additional degree of freedom to tolerate network latency: they do not synchronize shared memory references, and behave correctly when supplied outdated shared data. However, these algorithms often have a high communication-to-computation ratio and can flood the network with messages in the presence of large message delays. We study the performance of controlled asynchronous implementations of these algorithms via the use of our previously proposed blocking Global Read memory access primitive. Global Read implements non-strict cache coherence by guaranteeing to return to the reader a shared datum value from within a specified staleness range. Experiments on an IBM SP2 multicomputer with an Ethernet show significant performance improvements for controlled asynchronous implementations. On a lightly loaded Ethernet network, most of the GA benchmarks see 30% to 40% improvement over the best competitor for 2 to 16 processors, while two of the Probabilistic Inference benchmarks see more than 80% improvement for two processors. As the network load increases, the benefits of non-strict cache coherence increase significantly.
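To make the staleness-bounded read idea concrete, here is a toy sketch of a Global-Read-style accessor; it is not the SP2 DSM implementation, and the cache/refresh machinery shown is hypothetical:

```python
import time

# A reader accepts a cached copy only if it is newer than a caller-specified
# staleness bound; otherwise it refreshes from the (possibly remote) source.
class NonStrictCell:
    def __init__(self, fetch):
        self._fetch = fetch                 # callable returning fresh data
        self._value, self._stamp = None, 0.0

    def global_read(self, max_staleness: float):
        if self._value is None or time.time() - self._stamp > max_staleness:
            self._value, self._stamp = self._fetch(), time.time()
        return self._value                  # possibly stale, but within bound

cell = NonStrictCell(lambda: 42)
print(cell.global_read(max_staleness=0.5))  # stale reads avoid network traffic
```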
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model, inspired by the Artificial Bee Colony algorithm, is proposed for the dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal scheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed once the exact times are determined, taking machine set-up into account. In order to evaluate the performance of the proposed model, numerical experiments are designed in small, medium and large sizes at different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adapting their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0..score_13: 1, 0.002565, 0.001576, 0.00105, 0.000807, 0.000671, 0.000493, 0.000379, 0.000223, 0.000112, 0.000081, 0.000068, 0.000061, 0.000059
FAST TCP: Motivation, Architecture, Algorithms, Performance We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties which the current TCP implementation has at large windows. We describe the architecture and summarize some of the algorithms implemented in our prototype. We characterize its equilibrium and stability properties. We evaluate it experimentally in terms of throughput, fairness, stability, and responsiveness. Index Terms—FAST TCP, implementation, Internet congestion control, protocol design, stability analysis.
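The FAST TCP paper specifies a periodic, delay-driven window update of the form w <- min(2w, (1 - gamma)w + gamma((baseRTT/RTT)w + alpha)). A small sketch; the alpha and gamma values and the RTT trace here are illustrative:

```python
# FAST TCP window update: at equilibrium, each flow keeps roughly `alpha`
# packets queued in the network (w = (baseRTT/RTT)*w + alpha).

def fast_update(w, base_rtt, rtt, alpha=20.0, gamma=0.5):
    return min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

w, base_rtt = 10.0, 0.050                          # cwnd (pkts), propagation RTT (s)
for rtt in [0.050, 0.055, 0.060, 0.070, 0.070]:    # queueing delay builds up
    w = fast_update(w, base_rtt, rtt)
    print(f"RTT {rtt * 1000:.0f} ms -> cwnd {w:.1f}")
```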
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to settle the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize each erratum as describing, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected across all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
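The admission step lends itself to a compact sketch: a classifier predicts whether a task can finish within its deadline, and the queue admits or drops it accordingly. The feature set, synthetic labels, and the `admit` helper below are assumptions for illustration, not the paper's actual formulation.

```python
# Hedged sketch of SVM-based task admission with synthetic training data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Features: [task size, deadline slack, channel quality]; label: met deadline?
X = rng.uniform(size=(200, 3))
y = (0.8 * X[:, 1] + 0.5 * X[:, 2] - X[:, 0] > 0).astype(int)
admission = SVC(kernel="rbf").fit(X, y)

def admit(task_features, queue):
    if admission.predict(np.asarray(task_features).reshape(1, -1))[0] == 1:
        queue.append(task_features)   # predicted to meet its deadline: enqueue
        return True
    return False                      # predicted to miss: drop early

queue = []
print(admit([0.2, 0.9, 0.7], queue), len(queue))
```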
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints while providing cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
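The head-DT election can be pictured with a toy auction. The second-price rule and the bid values below are purely illustrative assumptions, not the paper's AG-CDM specification.

```python
# Toy auction sketch: each AV's digital twin bids with its spare computing
# capability; the highest bidder becomes the head DT and (here, by an
# assumed second-price rule) pays the runner-up's bid.
bids = {"DT_A": 3.2, "DT_B": 5.1, "DT_C": 4.4}
ranked = sorted(bids, key=bids.get, reverse=True)
head, runner_up = ranked[0], ranked[1]
price = bids[runner_up]            # winner pays the second-highest bid
print(head, price)                 # DT_B becomes head DT at price 4.4
```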
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
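The reward signal named in the highlights (speed deviation with a 1 s reaction delay) can be sketched directly. The buffer length, time step, and class name below are assumptions; this is not the paper's exact reward implementation.

```python
# Illustrative reward: negative speed deviation against a 1 s delayed
# observation of the leader, implemented as a fixed-length buffer.
from collections import deque

class DelayedSpeedReward:
    def __init__(self, delay_steps: int):
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)

    def step(self, leader_speed: float, follower_speed: float) -> float:
        delayed_leader = self.buffer[0]          # observation from 1 s ago
        self.buffer.append(leader_speed)
        return -abs(follower_speed - delayed_leader)  # penalize deviation

r = DelayedSpeedReward(delay_steps=10)           # e.g. 10 steps of 0.1 s
print(r.step(leader_speed=15.0, follower_speed=14.0))
```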
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently, causing deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal scheduling after any change in tasks, in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed once the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
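The schedule/re-schedule idea can be illustrated with a deliberately simplified list scheduler: jobs are first assigned using estimated processing times, then re-assigned once exact times are known. The greedy LPT rule and the job data below are assumptions; they are not the ABC-based model from the paper.

```python
# Toy sketch: greedy longest-processing-time assignment to the least-loaded
# machine; makespan is the maximum machine load.
def greedy_schedule(times, n_machines=2):
    loads = [0.0] * n_machines
    assignment = {}
    for job in sorted(times, key=times.get, reverse=True):  # LPT order
        m = loads.index(min(loads))
        assignment[job] = m
        loads[m] += times[job]
    return assignment, max(loads)

estimated = {"J1": 4, "J2": 6, "J3": 3, "J4": 5}
exact = {"J1": 5, "J2": 4, "J3": 6, "J4": 5}    # observed deviations
print(greedy_schedule(estimated))               # primary schedule
print(greedy_schedule(exact))                   # re-scheduling on exact times
```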
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weight bits in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-valued weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
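The core intuition behind split-and-accumulate is that a fixed-point multiply can be decomposed into shift-adds at the weight's set-bit positions, so zero bits contribute no work. The software sketch below illustrates that arithmetic identity only; it is not the Tetris hardware design.

```python
# Software illustration of the SAC idea: accumulate shifted activations
# only at the weight's essential (non-zero) bit positions.
def sac_multiply(activation: int, weight: int) -> int:
    acc = 0
    bit = 0
    w = weight
    while w:
        if w & 1:                      # only essential bits do work
            acc += activation << bit   # shift-and-accumulate
        w >>= 1
        bit += 1
    return acc

assert sac_multiply(7, 10) == 70       # matches ordinary multiplication
```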
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
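A scalar Kalman filter makes the estimation idea concrete: treat trust as a slowly varying hidden state and fuse noisy behavioral measurements into a running estimate. The dynamics and noise values below are assumptions, not the paper's identified parameters.

```python
# Minimal scalar Kalman filter: trust assumed locally constant with process
# noise q; z is a noisy behavioral measurement (e.g., a gaze-derived score).
def kalman_step(x, P, z, q=0.01, r=0.1):
    P = P + q                  # predict step inflates uncertainty
    K = P / (P + r)            # Kalman gain
    x = x + K * (z - x)        # correct toward the measurement
    P = (1 - K) * P
    return x, P

x, P = 0.5, 1.0
for z in [0.6, 0.65, 0.7, 0.68]:
    x, P = kalman_step(x, P, z)
print(round(x, 3))
```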
score_0 to score_13: 1, 0.002155, 0.001324, 0.000882, 0.000678, 0.000564, 0.000414, 0.000318, 0.000187, 0.000094, 0.000068, 0.000057, 0.000051, 0.00005
Random early detection gateways for congestion avoidance This paper presents Random Early Detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
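The mechanism described above reduces to two pieces: an exponentially weighted moving average of the queue size, and a drop/mark probability that rises between two thresholds. The sketch below follows that description; the parameter values are illustrative, and refinements in the full RED algorithm (such as the inter-mark packet count) are omitted.

```python
# Sketch of RED: EWMA of queue size, linear drop probability between
# min and max thresholds, certain drop above the max threshold.
import random

class RedGateway:
    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, w=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.avg = 0.0

    def on_packet(self, queue_len: int) -> bool:
        """Return True if the arriving packet should be dropped/marked."""
        self.avg = (1 - self.w) * self.avg + self.w * queue_len  # EWMA
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

gw = RedGateway()
drops = sum(gw.on_packet(12) for _ in range(10000))
print(drops)
```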
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; in addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints while providing cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
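The monitoring-frequency measure central to the study above can be computed from a labeled gaze stream. The label names, sampling rate, and transition rule below are assumptions for illustration, not the study's exact operationalization.

```python
# Illustrative monitoring frequency: gaze transitions from the
# non-driving-related task back to the road, per minute of drive time.
def monitoring_frequency(gaze_labels, sample_hz=60):
    glances = sum(1 for prev, cur in zip(gaze_labels, gaze_labels[1:])
                  if prev == "ndrt" and cur == "road")
    minutes = len(gaze_labels) / sample_hz / 60
    return glances / minutes if minutes else 0.0

labels = ["ndrt"] * 300 + ["road"] * 60 + ["ndrt"] * 300 + ["road"] * 60
print(monitoring_frequency(labels))  # 2 glances over 12 s -> 10 per minute
```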
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weight bits in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-valued weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0 to score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Performance assessment of multiobjective optimizers: an analysis and review An important issue in multiobjective optimization is the quantitative comparison of the performance of different algorithms. In the case of multiobjective evolutionary algorithms, the outcome is usually an approximation of the Pareto-optimal set, which is denoted as an approximation set, and therefore the question arises of how to evaluate the quality of approximation sets. Most popular are methods that assign each approximation set a vector of real numbers that reflect different aspects of the quality. Sometimes, pairs of approximation sets are also considered. In this study, we provide a rigorous analysis of the limitations underlying this type of quality assessment. To this end, a mathematical framework is developed which allows one to classify and discuss existing techniques.
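One family of quality measures this literature analyzes maps a pair of approximation sets to a real number. The additive epsilon indicator is a compact example: the smallest shift by which every point of one set must be improved to weakly dominate the other (minimization assumed). The sketch below is a straightforward implementation of that definition; the example fronts are made up.

```python
# Additive epsilon indicator I_eps+(A, B) for minimization problems:
# max over b in B of min over a in A of max coordinate-wise (a_i - b_i).
def additive_epsilon(A, B):
    return max(min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
               for b in B)

A = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
B = [(1.5, 4.5), (3.0, 2.5)]
print(additive_epsilon(A, B))  # negative: A dominates B with some slack
```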
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; in addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints while providing cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering the two scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
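The communication pattern that saves this overhead is the standard federated averaging loop: only model weights travel to the BS, never the raw pilot/channel data. The sketch below uses tiny linear models and synthetic data as stand-ins for the paper's CNN and measurement datasets; all names and values are illustrative.

```python
# Bare-bones FedAvg-style loop: each user fits a local model on private
# data; the BS averages the returned weights.
import numpy as np

rng = np.random.default_rng(2)
true_w = rng.normal(size=4)

def local_update(w, n=50, lr=0.1, steps=20):
    X = rng.normal(size=(n, 4))
    y = X @ true_w                           # pilots -> channel (stand-in)
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / n   # local gradient steps
    return w

w_global = np.zeros(4)
for _ in range(10):
    locals_ = [local_update(w_global.copy()) for _ in range(5)]
    w_global = np.mean(locals_, axis=0)      # aggregation at the BS
print(np.linalg.norm(w_global - true_w))     # error shrinks over rounds
```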
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures could seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique that can predict the failure-proneness of a node in a cloud service system based on historical data, before node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, therefore improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure could be caused by a variety of reasons and reflected by many temporal and spatial signals; furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate the spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
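Two of the listed ingredients, combining two model scores through a ranker and picking a cost-sensitive threshold, can be sketched compactly. Logistic regression stands in for the LSTM, Random Forest, and ranking models here, and the cost values are assumptions; this is not the MING implementation.

```python
# Hedged sketch: fuse temporal and spatial scores, then choose the
# threshold minimizing an asymmetric misclassification cost.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
temporal = rng.uniform(size=500)             # stand-in for the LSTM score
spatial = rng.uniform(size=500)              # stand-in for the RF score
y = ((0.6 * temporal + 0.4 * spatial
      + rng.normal(scale=0.1, size=500)) > 0.7).astype(int)

ranker = LogisticRegression().fit(np.c_[temporal, spatial], y)
scores = ranker.predict_proba(np.c_[temporal, spatial])[:, 1]

def best_threshold(scores, y, cost_fn=5.0, cost_fp=1.0):
    # Missing a faulty node (FN) is costlier than a false alarm (FP).
    candidates = np.linspace(0.05, 0.95, 19)
    costs = [cost_fn * np.sum((scores < t) & (y == 1)) +
             cost_fp * np.sum((scores >= t) & (y == 0)) for t in candidates]
    return candidates[int(np.argmin(costs))]

print(best_threshold(scores, y))
```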
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0 to score_13: 1, 0.001942, 0.001193, 0.000795, 0.000611, 0.000508, 0.000373, 0.000287, 0.000169, 0.000085, 0.000062, 0.000052, 0.000046, 0.000045
Indicator-Based Selection in Multiobjective Search This paper discusses how preference information of the decision maker can in general be integrated into multiobjective search. The main idea is to first define the optimization goal in terms of a binary performance measure (indicator) and then to directly use this measure in the selection process. To this end, we propose a general indicator-based evolutionary algorithm (IBEA) that can be combined with arbitrary indicators. In contrast to existing algorithms, IBEA can be adapted to the preferences of the user and, moreover, does not require any additional diversity preservation mechanism such as fitness sharing. It is shown on several continuous and discrete benchmark problems that IBEA can substantially improve on the results generated by two popular algorithms, namely NSGA-II and SPEA2, with respect to different performance measures.
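The "use the indicator directly in selection" idea can be made concrete with the additive epsilon indicator: each individual's fitness aggregates scaled indicator values against the rest of the population, and environmental selection removes the worst. The sketch below follows that general recipe; the population values are made up and minimization is assumed.

```python
# IBEA-style fitness assignment with the additive epsilon indicator:
# F(x) = sum over y != x of -exp(-I({y},{x}) / kappa).
import math

def eps_indicator(a, b):
    # Smallest shift making a weakly dominate b (minimization).
    return max(a_i - b_i for a_i, b_i in zip(a, b))

def ibea_fitness(pop, kappa=0.05):
    return [sum(-math.exp(-eps_indicator(other, x) / kappa)
                for other in pop if other is not x) for x in pop]

pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.5, 3.5)]
fit = ibea_fitness(pop)
worst = pop[fit.index(min(fit))]   # environmental selection removes the worst
print(worst)                       # the dominated point (3.5, 3.5)
```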
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
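To make the framework's structure concrete, here is a minimal sketch of the cluster-then-dispatch idea on synthetic data: identify operating conditions by clustering, fit one simple model per condition, and invoke the model of the nearest cluster centre at prediction time. Plain k-means and ridge regression stand in for WKFCM and the broad learning model, so this shows the pipeline shape only, not the paper's algorithm.

# Cluster operating conditions, fit one model per condition, dispatch by
# nearest centre. All data and models are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                 # stand-in sintering parameters
y = X @ np.array([0.5, -1.0, 0.3, 0.8]) + rng.normal(scale=0.1, size=300)

def kmeans(X, k=3, iters=50):
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres, labels

def ridge_fit(X, y, lam=1e-3):
    A = np.c_[X, np.ones(len(X))]             # add a bias column
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

centres, labels = kmeans(X)
models = [ridge_fit(X[labels == j], y[labels == j]) for j in range(len(centres))]

def predict(x):
    j = int(np.argmin(((centres - x) ** 2).sum(-1)))  # nearest-neighbour rule
    return np.r_[x, 1.0] @ models[j]                  # invoke that condition's model

print(predict(X[0]), y[0])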
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
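A minimal sketch of the admission decision described above, assuming synthetic training data and hypothetical task features: an SVM classifier estimates whether a task can finish before its deadline, and only then is the task queued.

# Sketch of SVM-based task admission. Features, labels, and thresholds are
# synthetic placeholders, not the paper's setup.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# hypothetical features: [task size, deadline, channel quality, queue backlog]
X = rng.uniform(size=(500, 4))
# synthetic label: completable if workload is small relative to the deadline
y = (X[:, 0] / (X[:, 1] + 0.1) < 1.5).astype(int)

admission = SVC(kernel="rbf").fit(X, y)

def admit(task_features, queue):
    if admission.predict([task_features])[0] == 1:
        queue.append(task_features)   # add to the user's queue for processing
        return True
    return False                      # drop: unlikely to meet its deadline

queue = []
print(admit([0.2, 0.8, 0.5, 0.1], queue), len(queue))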
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance under noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
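The core of the FL setup above (local training on each user's pilot/channel pairs, aggregation of model parameters at the BS, no raw data upload) can be sketched compactly. A least-squares estimator stands in for the paper's CNN; shapes, data, and the plain averaging step are illustrative.

# Federated-averaging sketch: only model weights travel to the BS.
import numpy as np

rng = np.random.default_rng(2)
true_W = rng.normal(size=(8, 8))                    # unknown pilot-to-channel mapping

def local_dataset(n=200):
    pilots = rng.normal(size=(n, 8))
    return pilots, pilots @ true_W + 0.01 * rng.normal(size=(n, 8))

users = [local_dataset() for _ in range(10)]

def local_solve(pilots, channels):
    # least-squares fit on local data only (this computation stays on the device)
    return np.linalg.lstsq(pilots, channels, rcond=None)[0]

# The BS aggregates model updates, never the users' raw datasets.
global_W = np.mean([local_solve(P, H) for P, H in users], axis=0)
print(np.linalg.norm(global_W - true_W))            # small estimation error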
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
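The split-and-accumulate pattern reduces a multiplication to shift-and-add over the essential (non-zero) bits of a weight, which is easy to illustrate. The sketch below shows the arithmetic idea only, not the Tetris hardware design or the weight-kneading encoding.

# Shift-and-add over essential bits: zero bits contribute no work.
def essential_bits(w: int):
    """Positions of the set bits of a non-negative integer weight."""
    return [i for i in range(w.bit_length()) if (w >> i) & 1]

def sac_multiply(activation: int, weight: int) -> int:
    sign = -1 if weight < 0 else 1
    # sum of shifted activations, one term per essential bit
    return sign * sum(activation << i for i in essential_bits(abs(weight)))

w = 0b01000100  # 68: only 2 of its 8 bits are essential
print(sac_multiply(5, w), 5 * w)   # both print 340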
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computes trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
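The Kalman filter-based estimation admits a compact scalar sketch: propagate the hidden trust level with a linear dynamics model, then correct it with a noisy behavioral measurement (for example, one derived from eye tracking). All dynamics and noise parameters below are illustrative, not the values identified in the study.

# Scalar Kalman filter sketch for trust estimation.
def kalman_step(x, P, z, a=0.98, b=0.02, q=1e-4, r=1e-2):
    # predict: trust drifts toward a baseline while the system is in use (u = 1)
    x_pred = a * x + b * 1.0
    P_pred = a * a * P + q
    # update with the sensed-behavior measurement z
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.5, 1.0                         # initial trust estimate and variance
for z in [0.55, 0.6, 0.58, 0.7, 0.72]:  # noisy trust-related observations
    x, P = kalman_step(x, P, z)
print(round(x, 3))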
score_0–score_13: 1, 0.001918, 0.001179, 0.000785, 0.000604, 0.000502, 0.000369, 0.000283, 0.000167, 0.000084, 0.000061, 0.000051, 0.000045, 0.000044
PVS: A Prototype Verification System [no abstract available]
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning.
• A car-following model was proposed based on deep reinforcement learning.
• It uses speed deviations as the reward function and considers a reaction delay of 1 s (a minimal reward sketch follows below).
• The deep deterministic policy gradient algorithm was used to optimize the model.
• The model outperformed traditional and recent data-driven car-following models.
• The model demonstrated good capability of generalization.
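A minimal sketch of the reward idea from the highlights above, assuming a 0.1 s control step so that the 1 s reaction delay spans ten steps; the class name and all constants are illustrative, not the paper's implementation.

# Reward = negative squared deviation from a target speed observed 1 s ago.
from collections import deque

DELAY_STEPS = 10          # 1 s reaction delay at an assumed 0.1 s control step

class SpeedDeviationReward:
    def __init__(self):
        self.history = deque(maxlen=DELAY_STEPS)    # buffer of past targets

    def __call__(self, ego_speed: float, target_speed: float) -> float:
        self.history.append(target_speed)
        delayed = self.history[0]                   # target from ~1 s ago (shorter during warm-up)
        return -(ego_speed - delayed) ** 2          # small deviation -> high reward

r = SpeedDeviationReward()
for t in range(12):
    print(round(r(ego_speed=10.0, target_speed=10.5 + 0.1 * t), 4))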
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
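The straggler effect described above follows directly from the over-the-air aggregation rule: every device pre-scales its update so the superposed signals align at the server, and the common scaling factor is capped by the weakest channel. A noise-free numpy sketch with illustrative channel gains (relays are omitted; their role is precisely to lift the weakest effective gain):

# Over-the-air model aggregation: the multi-access channel sums the signals.
import numpy as np

rng = np.random.default_rng(3)
updates = rng.normal(size=(5, 16))           # local model updates from 5 devices
h = np.array([1.0, 0.9, 0.8, 0.7, 0.05])     # channel gains; device 4 is the straggler

eta = h.min()                                 # common scaling limited by the weakest channel
tx = (eta / h)[:, None] * updates             # per-device pre-scaling (power grows as h drops)
received = (h[:, None] * tx).sum(0)           # superposition over the air (noise omitted)
estimate = received / (5 * eta)               # server recovers the average update
print(np.allclose(estimate, updates.mean(0))) # True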
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computes trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Petri nets: Properties, analysis and applications The paper starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, and subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets, with their application to performance modeling, and on high-level nets, with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets.
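The firing rule at the heart of the surveyed models is compact enough to state in code: a transition is enabled when each of its input places holds at least the required number of tokens, and firing consumes tokens from the inputs and produces tokens in the outputs. A minimal sketch of this standard rule:

# Standard Petri net firing rule on a marking represented as a dict.
def enabled(marking, pre):
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    assert enabled(marking, pre), "transition not enabled"
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w                      # consume input tokens
    for p, w in post.items():
        m[p] = m.get(p, 0) + w         # produce output tokens
    return m

# Tiny producer/consumer example: t moves a token from 'buffer' to 'consumed'.
marking = {"buffer": 2, "consumed": 0}
marking = fire(marking, pre={"buffer": 1}, post={"consumed": 1})
print(marking)   # {'buffer': 1, 'consumed': 1}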
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning.
• A car-following model was proposed based on deep reinforcement learning.
• It uses speed deviations as the reward function and considers a reaction delay of 1 s.
• The deep deterministic policy gradient algorithm was used to optimize the model.
• The model outperformed traditional and recent data-driven car-following models.
• The model demonstrated good capability of generalization.
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes, such as changes in task features, may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. The problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal rescheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed, taking machine set-up into account, once the exact times are determined. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in the processing times; the statistical results illustrate the efficiency of the proposed algorithm.
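As a baseline illustration of the schedule-then-reschedule setting (not the bee-colony heuristic itself), the sketch below dispatches jobs greedily to the earliest-free machine, first with estimated processing times and again once the exact times are known; the job data are invented.

# Greedy longest-processing-time dispatch; returns the resulting makespan.
import heapq

def dispatch(times, m=2):
    machines = [0.0] * m
    heapq.heapify(machines)                 # machine finish times as a min-heap
    for t in sorted(times, reverse=True):   # longest processing time first
        free = heapq.heappop(machines)      # earliest-free machine
        heapq.heappush(machines, free + t)
    return max(machines)

estimated = [4, 3, 2, 6, 5]
exact = [5, 3, 2, 7, 4]                     # processing times deviate from estimates
print(dispatch(estimated), dispatch(exact)) # schedule, then re-schedule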
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. The service quality matters because system failures could seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique, which can predict the failure-proneness of a node in a cloud service system based on historical data, before a node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, therefore improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure could be caused by a variety of reasons and reflected by many temporal and spatial signals. Furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
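A minimal sketch of the combine-and-rank step described above, with stand-in per-node scores in place of MING's LSTM and Random Forest outputs; the combiner weights and the threshold are illustrative, whereas in practice the ranker is learned and the threshold comes from the cost-sensitive function.

# Combine temporal and spatial scores, rank nodes, select the faulty ones.
import numpy as np

rng = np.random.default_rng(4)
temporal_score = rng.uniform(size=50)     # stand-in for LSTM output per node
spatial_score = rng.uniform(size=50)      # stand-in for Random Forest output

# illustrative fixed-weight combiner over the intermediate results
failure_proneness = 0.6 * temporal_score + 0.4 * spatial_score

def select_faulty(scores, threshold):
    # the threshold would be tuned by a cost-sensitive function in practice
    return np.flatnonzero(scores >= threshold)

ranked = np.argsort(-failure_proneness)   # most failure-prone nodes first
print(ranked[:5], select_faulty(failure_proneness, 0.8))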
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computes trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.002332, 0.001433, 0.000955, 0.000734, 0.00061, 0.000449, 0.000344, 0.000203, 0.000102, 0.000074, 0.000062, 0.000055, 0.000054
Image quality assessment: from error visibility to structural similarity. Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
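The structural similarity idea reduces to comparing luminance, contrast, and structure statistics between the two images. Below is a single-window numpy sketch of the standard SSIM formula; the actual index is computed over local windows and averaged, so this global version is for illustration only.

# Global-statistics SSIM: ((2*mx*my + c1)(2*cov + c2)) / ((mx^2+my^2+c1)(vx+vy+c2))
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(5)
ref = rng.uniform(0, 255, size=(64, 64))
dist = np.clip(ref + rng.normal(scale=10, size=ref.shape), 0, 255)
print(round(float(ssim_global(ref, ref)), 4),    # 1.0 for identical images
      round(float(ssim_global(ref, dist)), 4))   # < 1.0 for the distorted image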
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
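The cluster-then-model pipeline described above can be sketched with off-the-shelf components. In the sketch below, KMeans stands in for WKFCM clustering and ridge regression stands in for the broad learning model; the synthetic data, cluster count, and model choices are placeholder assumptions rather than the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

# Placeholder data: sintering parameters per time step (X) and carbon efficiency (y).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 8)), rng.normal(size=500)

# 1) Identify operating conditions (KMeans as a stand-in for WKFCM clustering).
conditions = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 2) Train one lightweight model per operating condition (Ridge as a stand-in for the BLM).
models = {c: Ridge().fit(X[conditions.labels_ == c], y[conditions.labels_ == c])
          for c in range(3)}

# 3) Route a new sample to the model of its nearest condition (nearest-neighbor criterion).
def predict(x_new):
    c = conditions.predict(x_new.reshape(1, -1))[0]
    return models[c].predict(x_new.reshape(1, -1))[0]
```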
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and notable energy savings and cost reductions are achieved compared with alternative baseline approaches.
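A minimal sketch of the SVM admission step alone, assuming hypothetical task features and a toy labeling rule; the paper's actual feature set and the Lyapunov-based offloading logic are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical features: [input size, CPU cycles, deadline, queue length, link rate];
# label 1 means the task historically completed within its deadline.
rng = np.random.default_rng(1)
X_hist = rng.uniform(size=(200, 5))
y_hist = (X_hist[:, 2] > X_hist[:, 1]).astype(int)  # toy labeling rule

admitter = SVC(kernel="rbf").fit(X_hist, y_hist)

def admit(task_features):
    """Add the task to the user's queue only if the SVM predicts on-time completion."""
    return bool(admitter.predict(np.asarray(task_features).reshape(1, -1))[0])
```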
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC support remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
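As a rough illustration of the auction-game idea for electing each group's head DT, here is a minimal second-price auction sketch; the bid semantics and payment rule are assumptions for illustration, not the paper's AG-CDM mechanism.

```python
def elect_head_dt(bids):
    """bids: {dt_id: bid}; the highest bidder (e.g., reflecting spare compute)
    becomes head DT and pays the second-highest bid (second-price rule)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    head = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return head, payment

head, price = elect_head_dt({"dt_a": 4.0, "dt_b": 7.5, "dt_c": 5.2})  # -> ("dt_b", 5.2)
```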
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: a car-following model was proposed based on deep reinforcement learning; it uses speed deviation as the reward function and considers a reaction delay of 1 s; the deep deterministic policy gradient (DDPG) algorithm was used to optimize the model; the model outperformed traditional and recent data-driven car-following models; and it demonstrated good generalization capability.
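Two of the highlights, the speed-deviation reward and the 1 s reaction delay, can be written down directly. The sketch below is illustrative only; the sign, scale, and simulation step size are assumptions rather than the paper's exact formulation.

```python
import numpy as np

REACTION_DELAY_STEPS = 10  # 1 s reaction delay at an assumed 0.1 s simulation step

def reward(simulated_speed, observed_speed):
    """Speed-deviation reward: closer tracking of the empirically observed
    following speed yields a higher reward."""
    return -abs(simulated_speed - observed_speed)

def delayed_observation(speed_history, gap_history, t):
    """The agent acts on the state from one reaction delay in the past."""
    i = max(0, t - REACTION_DELAY_STEPS)
    return np.array([speed_history[i], gap_history[i]])
```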
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes, such as changes in task features, may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for the dynamic flexible job-shop scheduling problem (DFJSP). This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from their estimated values. The objective is near-optimal rescheduling after any change in tasks so as to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed once the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times; the statistical results illustrate the efficiency of the proposed algorithm.
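For intuition, the schedule-then-reschedule idea can be shown on a deliberately simplified model with one operation per job; the full DFJSP has multiple operations, machine flexibility, and set-up times, and the ABC-inspired heuristic would search over such schedules rather than use the greedy rule below.

```python
import heapq

def greedy_makespan(processing_times, n_machines):
    """Earliest-available-machine assignment with a longest-processing-time
    ordering; returns the makespan of the resulting schedule."""
    machines = [0.0] * n_machines
    heapq.heapify(machines)
    for p in sorted(processing_times, reverse=True):
        t = heapq.heappop(machines)
        heapq.heappush(machines, t + p)
    return max(machines)

estimated = greedy_makespan([4, 2, 7, 3, 5], n_machines=2)  # initial schedule
actual = greedy_makespan([5, 2, 6, 4, 5], n_machines=2)     # re-schedule with exact times
```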
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
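The 68.9% zero-bit statistic is easy to probe on any quantized weight tensor, and it is exactly the quantity that weight kneading targets. The sketch below counts zero bits in an int8 tensor; the random weights are placeholders.

```python
import numpy as np

def zero_bit_fraction(weights_int8):
    """Fraction of zero bits across an int8 weight tensor (two's-complement view)."""
    as_bytes = weights_int8.astype(np.uint8)  # reinterpret bytes for bit counting
    bits = np.unpackbits(as_bytes.reshape(-1, 1), axis=1)
    return 1.0 - bits.mean()

w = np.random.default_rng(2).integers(-8, 8, size=1024, dtype=np.int8)
print(zero_bit_fraction(w))  # small-magnitude weights -> a large fraction of zero bits
```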
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
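In its simplest form, the Kalman filter-based integration of sensed behaviors reduces to a scalar predict-update cycle. Below is a minimal sketch, assuming a near-random-walk trust state and a single aggregated behavioral measurement; the noise parameters and measurement mapping are placeholder assumptions, not the parameters identified in the user study.

```python
def kalman_trust_update(x, P, z, A=1.0, H=1.0, Q=0.01, R=0.1):
    """One scalar Kalman step: x = trust estimate, P = its variance,
    z = a sensed-behavior measurement mapped to the trust scale."""
    # Predict
    x_pred, P_pred = A * x, A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)  # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```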
score_0 to score_13: 1, 0.002188, 0.001345, 0.000895, 0.000689, 0.000572, 0.000421, 0.000323, 0.00019, 0.000096, 0.000069, 0.000058, 0.000052, 0.000051
An Analysis of Temporal-Difference Learning with Function Approximation We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain. The algorithm we analyze updates parameters of a linear function approximator online during a single endless trajectory of an irreducible aperiodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability one), a characterization of the limit of convergence, and a bound on the resulting approximation error. Furthermore, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. In addition to proving new and stronger positive results than those previously available, we identify the significance of online updating and potential hazards associated with the use of nonlinear function approximators. First, we prove that divergence may occur when updates are not based on trajectories of the Markov chain. This fact reconciles positive and negative results that have been discussed in the literature regarding the soundness of temporal-difference learning. Second, we present an example illustrating the possibility of divergence when temporal-difference learning is used in the presence of a nonlinear function approximator.
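The algorithm analyzed above, TD(0) with a linear function approximator updated online along a single trajectory, can be sketched in a few lines; the feature map, step size, and discount below are illustrative. The paper's divergence warning applies when the transitions fed to this loop are not sampled from trajectories of the Markov chain.

```python
import numpy as np

def td0_linear(phi, trajectory, alpha=0.01, gamma=0.95):
    """TD(0) with V(s) = phi(s) @ theta, updated online over a single
    trajectory of (s, r, s_next) transitions."""
    theta = np.zeros_like(phi(trajectory[0][0]), dtype=float)
    for s, r, s_next in trajectory:
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta  # TD error
        theta = theta + alpha * delta * phi(s)
    return theta
```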
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and notable energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC support remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted a great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (intelligent reflecting surface) assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained for two different datasets for both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than the state-of-the-art ML-based schemes.
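The FL framework above replaces raw data collection with model aggregation at the BS. The sketch below is the standard FedAvg-style aggregation, weighted by local dataset sizes; the layer shapes and three-user setup are placeholders, not the paper's exact training loop.

```python
import numpy as np

def fed_avg(local_weights, n_samples):
    """Aggregate per-user model weights (lists of numpy arrays), weighted by
    each user's local dataset size."""
    total = float(sum(n_samples))
    return [sum(w[k] * (n / total) for w, n in zip(local_weights, n_samples))
            for k in range(len(local_weights[0]))]

# Example: three users, each holding a toy two-layer model.
users = [[np.ones((4, 4)) * s, np.ones(4) * s] for s in (1.0, 2.0, 3.0)]
global_model = fed_avg(users, n_samples=[100, 200, 300])
```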
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed through a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
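The encoder-decoder core of such a model can be skeletonized as below; the embedding sizes, the vocabulary arguments (n_towers, n_roads), and the omission of DMM's spatial-aware representation and reinforcement learning refinement make this an illustrative skeleton only, not the paper's architecture.

```python
import torch.nn as nn

class CellToRoadSeq2Seq(nn.Module):
    """Skeleton: encode a cell-tower ID sequence, decode a road-segment ID sequence."""
    def __init__(self, n_towers, n_roads, dim=64):
        super().__init__()
        self.tower_emb = nn.Embedding(n_towers, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.road_emb = nn.Embedding(n_roads, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_roads)

    def forward(self, towers, roads_in):
        _, h = self.encoder(self.tower_emb(towers))        # summarize tower sequence
        dec, _ = self.decoder(self.road_emb(roads_in), h)  # teacher-forced decoding
        return self.out(dec)                               # logits over road segments
```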
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0 to score_13: 1, 0.002167, 0.001332, 0.000887, 0.000682, 0.000567, 0.000417, 0.00032, 0.000188, 0.000095, 0.000069, 0.000058, 0.000051, 0.00005
Prediction of silicon content in hot metal using support vector regression based on chaos particle swarm optimization The prediction of silicon content in hot metal has been a major study subject, as it is one of the most important means of monitoring the state of processes in the ferrous metallurgy industry. A prediction model of silicon content is established based on support vector regression (SVR), whose optimal parameters are selected by chaos particle swarm optimization. The data for the model were collected from No. 3 BF at Panzhihua Iron and Steel Group Co. of China. The results show that the proposed model yields better predictions than a neural network trained by chaos particle swarm optimization and than least squares support vector regression; the percentage of samples whose absolute prediction errors are less than 0.03 exceeds 90%, indicating that the prediction precision meets the requirements of practical production.
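The parameter-selection step can be sketched with a plain particle swarm search over SVR hyperparameters; ordinary random initialization stands in for the chaotic sequences of the paper, and the data are random stand-ins for the blast-furnace measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X, y = rng.normal(size=(120, 6)), rng.normal(size=120)  # stand-in data

def fitness(p):
    C, gamma, eps = np.exp(p)  # search in log space to keep parameters positive
    return cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps), X, y,
                           cv=3, scoring="neg_mean_squared_error").mean()

n, dim = 10, 3  # particles over (log C, log gamma, log epsilon)
pos = rng.uniform(-3, 3, size=(n, dim)); vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pval.argmax()].copy()
for _ in range(15):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals > pval
    pbest[better], pval[better] = pos[better], vals[better]
    gbest = pbest[pval.argmax()].copy()
best_C, best_gamma, best_eps = np.exp(gbest)
```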
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and notable energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC support remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: a car-following model was proposed based on deep reinforcement learning; it uses speed deviation as the reward function and considers a reaction delay of 1 s; the deep deterministic policy gradient (DDPG) algorithm was used to optimize the model; the model outperformed traditional and recent data-driven car-following models; and it demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed through a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0 to score_13: 1, 0.001893, 0.001164, 0.000775, 0.000596, 0.000495, 0.000365, 0.00028, 0.000164, 0.000083, 0.00006, 0.00005, 0.000045, 0.000044
Neuronlike adaptive elements that can solve difficult learning control problems
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, better reflecting the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
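As a minimal sketch of the cluster-then-predict pattern described here, the snippet below substitutes plain KMeans for WKFCM and ridge regression for the broad learning model; dispatch by nearest cluster centre mirrors the nearest-neighbour criterion. Data and all names are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Synthetic stand-in: 8 sintering parameters -> carbon efficiency value.
X, y = rng.normal(size=(600, 8)), rng.normal(size=600)

# 1) Identify operating conditions (KMeans stands in for WKFCM).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 2) Fit one lightweight model per operating condition (Ridge stands in for the BLM).
models = {c: Ridge().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(3)}

# 3) Dispatch a new sample to the model of its nearest operating condition.
def predict(x):
    c = int(km.predict(x.reshape(1, -1))[0])  # nearest-centre criterion
    return models[c].predict(x.reshape(1, -1))[0]

print(predict(rng.normal(size=8)))
```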
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric, real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add the task to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
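A minimal sketch of the SVM-based admission step follows, trained on synthetic task records; the feature choice, labels, and the admit/drop rule are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Assumed features per task: [input size, CPU cycles, deadline, channel quality].
X = rng.random((500, 4))
met_deadline = (X[:, 2] > 0.4).astype(int)  # synthetic labels from past runs

clf = SVC(kernel="rbf").fit(X, met_deadline)

def admit(task_features):
    """Admit the task to the processing queue only if the classifier
    predicts it can finish within its deadline; otherwise drop it early."""
    return bool(clf.predict(np.asarray(task_features).reshape(1, -1))[0])

print("admit" if admit([0.2, 0.5, 0.8, 0.6]) else "drop")
```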
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
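The exact rules of AG-CDM are not specified in this abstract. Purely as a generic illustration of auction-based role assignment, the sketch below picks the head DT as the highest bidder and the tail DT as the lowest (bids might encode computing capability); this simplified rule is an assumption, not the paper's mechanism.

```python
def select_head_and_tail(bids):
    """bids: {dt_id: bid_value}. Highest bidder becomes head DT,
    lowest becomes tail DT (illustrative rule, not the paper's AG-CDM)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], ranked[-1]

head, tail = select_head_and_tail({"dt_a": 7.2, "dt_b": 9.1, "dt_c": 4.5})
print(head, tail)  # -> dt_b dt_c
```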
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviation as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good generalization capability.
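The highlights specify speed deviation as the reward and a 1 s reaction delay. A minimal sketch of such a reward and delay is given below; the exact functional form and the 10 Hz sampling rate are assumptions.

```python
import numpy as np

def reward(ego_speed, target_speed):
    """Negative absolute speed deviation; the abstract names speed
    deviation as the reward signal (exact form assumed)."""
    return -abs(ego_speed - target_speed)

def delayed_observation(history, dt=0.1, delay_s=1.0):
    """Return the state the agent reacts to: the sample from delay_s
    seconds ago, modelling the 1 s human reaction delay."""
    lag = int(delay_s / dt)
    return history[-lag - 1] if len(history) > lag else history[0]

hist = list(np.linspace(10.0, 15.0, 50))  # leader speeds sampled at 10 Hz
obs = delayed_observation(hist)
print(obs, reward(ego_speed=12.0, target_speed=obs))
```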
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits make up as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
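The principle of skipping zero bits can be illustrated in software with bit-serial multiplication that accumulates only the set (essential) bits of each weight; this sketch shows the idea behind split-and-accumulate, not the Tetris hardware itself, and assumes non-negative integer weights.

```python
def sac_mac(activations, weights):
    """Multiply-accumulate touching only the set (essential) bits of each
    non-negative integer weight: w*x = sum over set bit positions p of (x << p).
    Zero-valued weights and zero bits contribute no work at all."""
    acc = 0
    for x, w in zip(activations, weights):
        p = 0
        while w:
            if w & 1:
                acc += x << p   # one shift-add per essential bit
            w >>= 1
            p += 1
    return acc

acts, wts = [3, 5, 7], [0, 6, 9]          # 6 = 0b110, 9 = 0b1001
assert sac_mac(acts, wts) == sum(a * w for a, w in zip(acts, wts))
```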
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
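A scalar Kalman filter over a latent trust level captures the estimation pattern described here, with the measurement being a behavior-derived trust proxy (e.g., from gaze and NDRT performance). The dynamics and noise values below are placeholders, not the paper's identified parameters.

```python
def kalman_step(x, P, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle for a scalar trust state x with
    variance P, given a behavior-derived trust measurement z."""
    # Predict.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update.
    K = P_pred * h / (h * P_pred * h + r)   # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

x, P = 0.5, 1.0                        # initial trust estimate and uncertainty
for z in [0.62, 0.58, 0.71, 0.69]:     # synthetic behavior-derived measurements
    x, P = kalman_step(x, P, z)
print(round(x, 3), round(P, 3))
```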
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Empirically derived analytic models of wide-area TCP connections This paper analyzes 3 million TCP connections that occurred during 15 wide-area traffic traces. The traces were gathered at five “stub” networks and two internetwork gateways, providing a diverse look at wide-area traffic. The author derives analytic models describing the random variables associated with TELNET, NNTP, SMTP, and FTP connections. To assess these models, the author presents a quantitative methodology for comparing their effectiveness with that of empirical models such as Tcplib [Danzig and Jamin, 1991]. The methodology also makes it possible to determine which random variables show significant variation from site to site, over time, or between stub networks and internetwork gateways. Overall, the author finds that the analytic models provide good descriptions and generally model the various distributions as well as the empirical models do.
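The methodology weighs analytic models against empirical ones for each connection variable. As a minimal sketch of that comparison, the snippet fits a log-normal to synthetic connection sizes and scores the fit with a Kolmogorov-Smirnov statistic; the distribution choice and data here are illustrative, not the paper's fitted models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
bytes_xfer = rng.lognormal(mean=8.0, sigma=1.5, size=2000)  # synthetic FTP-like sizes

# Analytic model: fit a log-normal to the observed connection sizes.
shape, loc, scale = stats.lognorm.fit(bytes_xfer, floc=0)

# Goodness of fit: Kolmogorov-Smirnov distance between data and model.
ks = stats.kstest(bytes_xfer, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}")
```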
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch at two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of the proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, better reflecting the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric, real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add the task to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviation as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good generalization capability.
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for the dynamic flexible job-shop scheduling problem (DFJSP). This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal re-scheduling after any change in tasks so as to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed once the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments of small, medium, and large size are designed at different levels of change in processing times; the statistical results illustrate the efficiency of the proposed algorithm.
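The schedule-then-reschedule step can be illustrated with a deliberately simplified toy: independent tasks on identical parallel machines (a simplification of the flexible job shop), scheduled greedily first with estimated times and again with the exact ones. The deviation factor and the greedy rule are assumptions, not the paper's ABC-based heuristic.

```python
def greedy_schedule(proc_times, n_machines):
    """Longest-processing-time-first list scheduling: assign each task to
    the machine that frees up earliest; returns per-machine loads."""
    loads = [0.0] * n_machines
    for t in sorted(proc_times, reverse=True):
        m = loads.index(min(loads))
        loads[m] += t
    return loads

estimated = [4, 3, 7, 2, 5, 6]
actual = [t * 1.2 for t in estimated]      # exact times deviate from estimates

print("planned makespan:", max(greedy_schedule(estimated, 2)))
print("re-scheduled makespan:", max(greedy_schedule(actual, 2)))
```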
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed through a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
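A minimal encoder-decoder skeleton in PyTorch conveys the sequence-to-sequence shape of the problem: cell-tower IDs in, road-segment IDs out. Embedding sizes, vocabulary sizes, the teacher-forcing setup, and all names are illustrative assumptions, not DMM's actual architecture.

```python
import torch
import torch.nn as nn

class Seq2SeqMatcher(nn.Module):
    """Toy encoder-decoder: cell-tower ID sequence in, road-segment logits out."""
    def __init__(self, n_towers, n_roads, dim=64):
        super().__init__()
        self.embed_in = nn.Embedding(n_towers, dim)
        self.embed_out = nn.Embedding(n_roads, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, n_roads)

    def forward(self, towers, roads_in):
        _, h = self.encoder(self.embed_in(towers))     # summarize tower sequence
        out, _ = self.decoder(self.embed_out(roads_in), h)
        return self.proj(out)                          # logits over road segments

model = Seq2SeqMatcher(n_towers=1000, n_roads=5000)
towers = torch.randint(0, 1000, (2, 12))    # batch of 2 tower sequences
roads_in = torch.randint(0, 5000, (2, 20))  # shifted targets (teacher forcing)
logits = model(towers, roads_in)            # shape (2, 20, 5000)
```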
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002118
0.001302
0.000867
0.000667
0.000554
0.000408
0.000313
0.000184
0.000092
0.000067
0.000056
0.00005
0.000049
Distinctive Image Features from Scale-Invariant Keypoints This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
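SIFT feature extraction and ratio-test matching, as described here, are available directly in OpenCV. The sketch below is a standard usage example; the image paths are placeholders.

```python
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if clearly better than the runner-up.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches")
```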
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch at two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of the proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, better reflecting the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric, real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add the task to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Object Detection with Deep Learning: A Review. Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates, even when complex ensembles are constructed that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development of deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, have been introduced to address the problems of traditional architectures. These models differ in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction to the history of deep learning and its representative tool, the convolutional neural network. Then, we focus on typical generic object detection architectures, along with modifications and useful tricks that further improve detection performance. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are suggested to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits make up as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Z3: An Efficient SMT Solver The Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first-order formulas with respect to combinations of background theories such as arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT solver freely available from Microsoft Research. It is used in various software verification and analysis applications.
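A small example with Z3's Python bindings shows the solver in action, mixing integer arithmetic with an uninterpreted function; the constraints themselves are arbitrary illustrations.

```python
from z3 import Int, Function, IntSort, Solver, sat

x, y = Int("x"), Int("y")
f = Function("f", IntSort(), IntSort())   # uninterpreted function

s = Solver()
s.add(x + y == 10, x > 0, y > 0, f(x) == f(y) + 1)

if s.check() == sat:
    print(s.model())   # a satisfying assignment for x, y, and f
```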
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch at two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of the proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
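A minimal sketch of the final model-selection step described above: the operating condition whose cluster center is nearest to a new sample determines which per-condition model is invoked. The WKFCM weighting and the broad learning model internals are omitted, and all names are assumptions.

```python
# Nearest-neighbor criterion for invoking the per-condition model.
import numpy as np

def select_and_predict(x, centers, models):
    """x: feature vector; centers: (k, d) cluster centers; models: k fitted models."""
    distances = np.linalg.norm(centers - x, axis=1)  # distance to each operating condition
    nearest = int(np.argmin(distances))              # nearest cluster center wins
    return models[nearest].predict(x.reshape(1, -1))
```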
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
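A hedged sketch of the SVM-based admission step: a binary classifier predicts whether a task can finish within its deadline from simple task features, and only predicted-feasible tasks join the queue. The feature set and labels below are illustrative assumptions; the Lyapunov offloading logic is not reproduced.

```python
# Minimal SVM task-admission sketch (features and labels are illustrative).
import numpy as np
from sklearn.svm import SVC

# Features: [input size (MB), CPU cycles (Gcycles), deadline (s), queue length]
X_train = np.array([[2.0, 5.0, 0.5, 3],
                    [0.5, 1.0, 1.0, 1],
                    [3.0, 8.0, 0.2, 5]])
y_train = np.array([0, 1, 0])  # 1 = expected to complete within its deadline

admitter = SVC(kernel="rbf").fit(X_train, y_train)

def admit(task_features) -> bool:
    # Add the task to the user's queue only if predicted to meet its deadline.
    return bool(admitter.predict(np.array([task_features]))[0])
```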
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand, distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
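To make the auction step concrete, here is a deliberately simplified sketch in which each DT bids its computing capability per unit cost and the highest bidder becomes the group's head DT; the bid definition is an assumption, not the AG-CDM mechanism itself.

```python
# Toy head-DT election: highest capability-per-cost bid wins (an assumption).
def elect_head_dt(dts):
    """dts: iterable of (dt_id, capability, cost) tuples."""
    bids = {dt_id: capability / cost for dt_id, capability, cost in dts}
    return max(bids, key=bids.get)

# Example: DT "b" wins with the best capability/cost ratio.
assert elect_head_dt([("a", 10.0, 5.0), ("b", 12.0, 4.0), ("c", 6.0, 3.0)]) == "b"
```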
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated a good capability of generalization.
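A minimal sketch of the reward idea listed above: penalize the deviation between the simulated speed and the observed (human) speed, applying a 1 s reaction delay. The paper's exact reward is not reproduced; the names and delay handling are illustrative assumptions.

```python
# Speed-deviation reward with a 1 s reaction delay (illustrative).
def car_following_reward(sim_speed, observed_speeds, t, dt=0.1, delay_s=1.0):
    """observed_speeds: empirical speed trace; t: current time step index."""
    delay_steps = int(delay_s / dt)            # 1 s reaction delay
    target = observed_speeds[max(t - delay_steps, 0)]
    return -abs(sim_speed - target)            # smaller deviation -> higher reward
```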
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
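The straggler effect described above can be seen in a toy simulation of over-the-air aggregation: transmit scaling is aligned to the weakest channel gain, so one deeply faded device degrades the whole aggregate, which is what the relays are deployed to alleviate. This is a sketch under simplified assumptions (real scalar gains, perfect synchronization), not the paper's system model.

```python
# Toy over-the-air model aggregation dominated by the weakest channel.
import numpy as np

def ota_aggregate(local_models, channel_gains, noise_std=0.01):
    """local_models: list of equally shaped 1-D weight vectors; channel_gains: positive scalars."""
    gains = np.asarray(channel_gains, dtype=float)
    eta = gains.min()                               # align to the weakest (straggler) channel
    tx = np.stack([eta / g * m for g, m in zip(gains, local_models)])
    received = (gains[:, None] * tx).sum(axis=0)    # superposition over the multi-access channel
    received += np.random.normal(0.0, noise_std, received.shape)
    return received / (eta * len(local_models))     # estimate of the averaged global model
```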
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern to replace conventional MAC, as well as the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
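The zero-bit statistic above is easy to estimate for any weight tensor; the sketch below counts the fraction of zero bits under an 8-bit magnitude quantization. The quantization scheme is an illustrative assumption, and weight kneading/SAC themselves are hardware techniques not reproduced here.

```python
# Estimate the fraction of ineffectual (zero) bits in quantized weights.
import numpy as np

def zero_bit_fraction(weights, bits=8):
    w = np.asarray(weights, dtype=float).ravel()
    scale = (2**bits - 1) / (np.abs(w).max() + 1e-12)         # magnitude quantization (assumed)
    q = np.clip(np.round(np.abs(w) * scale), 0, 2**bits - 1).astype(np.uint8)
    one_bits = np.unpackbits(q[:, None], axis=1).sum()        # count set bits per weight
    return 1.0 - one_bits / (q.size * bits)                   # zero-bit fraction
```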
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during the execution of real-time applications. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
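A minimal scalar Kalman filter sketch for the estimation approach above: trust is treated as a hidden state updated from a noisy behavioral measurement (e.g., a score derived from eye tracking and task performance). The random-walk dynamics and noise values are assumptions; the paper's identified model parameters are not reproduced.

```python
# One predict/update step of a scalar Kalman filter for a hidden trust level.
def kalman_step(x, P, z, Q=1e-3, R=1e-1):
    """x: trust estimate; P: its variance; z: behavioral measurement."""
    x_pred, P_pred = x, P + Q           # predict: random-walk trust dynamics (assumed)
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)   # correct with the sensed behavior
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```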
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices into band matrices is presented in this paper. This transformation is accomplished by triangular-block partitioning and allows the implementation of solutions to problems of any given size by means of the contraflow systolic arrays originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
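As a software illustration of the array just described, the sketch below simulates a linear systolic matrix-vector multiply in which each processing element keeps one running sum and consumes one operand pair per beat; the band/triangular-block partitioning and feedback scheme of the paper are not modeled.

```python
# Beat-by-beat simulation of a linear systolic matrix-vector multiply.
import numpy as np

def systolic_matvec(A, x):
    n, m = A.shape
    y = np.zeros(n)                     # one accumulator per processing element (PE)
    for beat in range(m):               # one column of operands flows in per beat
        for pe in range(n):
            y[pe] += A[pe, beat] * x[beat]
    return y

A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(systolic_matvec(A, np.array([1.0, 1.0])), A @ np.array([1.0, 1.0]))
```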
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand, distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern to replace conventional MAC, as well as the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during the execution of real-time applications. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Why systolic architectures?
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand, distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern to replace conventional MAC, as well as the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during the execution of real-time applications. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Simultaneously Applying Multiple Mutation Operators in Genetic Algorithms The mutation operation is critical to the success of genetic algorithms since it diversifies the search directions and avoids convergence to local optima. The earliest genetic algorithms use only one mutation operator in producing the next generation. Each problem, even each stage of the genetic process in a single problem, may require appropriately different mutation operators for best results. Determining which mutation operators should be used is quite difficult and is usually learned through experience or by trial-and-error. This paper proposes a new genetic algorithm, the dynamic mutation genetic algorithm, to resolve these difficulties. The dynamic mutation genetic algorithm simultaneously uses several mutation operators in producing the next generation. The mutation ratio of each operator changes according to evaluation results from the respective offspring it produces. Thus, the appropriate mutation operators can be expected to have increasingly greater effects on the genetic process. Experiments are reported that show the proposed algorithm performs better than most genetic algorithms with single mutation operators.
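The adaptive-ratio idea described above can be sketched briefly: several mutation operators coexist, each is chosen with probability proportional to its current ratio, and the ratios are re-weighted by the fitness gains of the offspring each operator produced. The update rule below is an illustrative assumption, not the paper's exact formula.

```python
# Dynamic mutation sketch: operator ratios adapt to offspring quality.
import random

def adapt_ratios(ratios, fitness_gains, floor=0.05):
    """Re-weight each operator by its offspring's fitness gain, keeping a floor."""
    raw = [max(r * (1.0 + g), floor) for r, g in zip(ratios, fitness_gains)]
    total = sum(raw)
    return [v / total for v in raw]     # renormalize to a probability distribution

def mutate(individual, operators, ratios):
    op = random.choices(operators, weights=ratios, k=1)[0]  # pick an operator by ratio
    return op(individual)
```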
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
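The cluster-then-dispatch pipeline above can be sketched with plain kernel fuzzy C-means plus a nearest-neighbor model selector; the paper's weighted variant adds learned feature weights that are omitted here, and the Gaussian kernel width is an assumed parameter:

```python
import numpy as np

def gaussian_kernel(X, v, sigma=1.0):
    # K(x, v) for every row x of X against one center v
    return np.exp(-np.sum((X - v) ** 2, axis=-1) / (2 * sigma ** 2))

def kfcm(X, c=3, m=2.0, iters=50, sigma=1.0, seed=0):
    """Kernel fuzzy C-means: identify operating conditions in kernel space."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]            # initial centers
    for _ in range(iters):
        K = np.stack([gaussian_kernel(X, v, sigma) for v in V])  # (c, n)
        d = np.clip(1.0 - K, 1e-12, None)                  # kernel-induced distance
        U = d ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0, keepdims=True)                  # memberships sum to 1
        w = (U ** m) * K
        V = (w @ X) / w.sum(axis=1, keepdims=True)         # center update
    return U, V

def nearest_condition(x, V):
    """Nearest-neighbor criterion: choose which per-condition BLM to invoke."""
    return int(np.argmin(np.linalg.norm(V - x, axis=1)))
```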
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric, real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add it to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared to alternative baseline approaches.
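The task admission step above reduces to a binary classifier over task features. A minimal sketch with scikit-learn; the feature set (input size, required cycles, deadline, queue backlog) and the training examples are hypothetical placeholders, not the paper's data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical history: [input_bits, cpu_cycles, deadline_s, queue_backlog]
X_hist = np.array([[2e6, 1e9, 0.5, 3],
                   [5e5, 2e8, 0.2, 1],
                   [8e6, 4e9, 0.3, 6],
                   [1e6, 5e8, 1.0, 2]])
y_hist = np.array([1, 1, 0, 1])  # 1 = the task met its deadline in past runs

admitter = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
admitter.fit(X_hist, y_hist)

def admit(task_features):
    """Admit the task to the queue only if on-time completion is predicted."""
    return bool(admitter.predict(np.asarray(task_features).reshape(1, -1))[0])

print(admit([1.5e6, 8e8, 0.6, 2]))  # True/False depending on learned boundary
```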
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communication (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. How to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
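The abstract does not specify the auction format used by AG-CDM, so the following is only a toy illustration of how an auction could select a head DT, assuming sealed bids derived from each AV's spare compute and a second-price (Vickrey) payment that rewards truthful bidding:

```python
def select_head_dt(bids):
    """Toy sealed-bid auction: the DT with the highest bid leads the group
    and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    head = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return head, price

# Hypothetical bids proportional to each AV's spare compute capability:
print(select_head_dt({"av1": 3.2, "av2": 5.1, "av3": 4.4}))  # ('av2', 4.4)
```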
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
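A minimal sketch of the two concrete ingredients the highlights name, a speed-deviation reward and a 1 s reaction delay; the buffer discretization (dt = 0.1 s) and the target-speed input are assumptions, and the DDPG training loop itself is omitted:

```python
from collections import deque

class CarFollowingEnvStep:
    """Delays the commanded acceleration by a fixed reaction time and scores
    the agent by negative absolute speed deviation."""
    def __init__(self, dt=0.1, delay_s=1.0):
        n = round(delay_s / dt)
        self.buffer = deque([0.0] * n, maxlen=n)

    def step(self, commanded_accel, ego_speed, target_speed):
        delayed_accel = self.buffer[0]            # command issued 1 s ago acts now
        self.buffer.append(commanded_accel)       # oldest entry is evicted
        reward = -abs(ego_speed - target_speed)   # speed deviation as the reward
        return delayed_accel, reward

env = CarFollowingEnvStep()
accel_now, r = env.step(commanded_accel=0.5, ego_speed=14.0, target_speed=15.0)
print(accel_now, r)  # 0.0 (nothing commanded yet), -1.0
```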
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes, such as changes in task features, may occur frequently. These changes cause deviations from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for the dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal rescheduling after any change in tasks so as to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed once the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times; the statistical results illustrate the efficiency of the proposed algorithm.
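The evaluate-then-reschedule loop above needs a schedule evaluator. A minimal sketch of semi-active makespan computation for a fixed operation order; the example jobs and the uniform 20% time deviation are hypothetical:

```python
def makespan(ops, n_jobs, n_machines):
    """ops: list of (job, machine, proc_time) in dispatch order.
    Each operation starts when both its job's previous operation
    and its assigned machine are free."""
    job_ready = [0.0] * n_jobs
    mach_ready = [0.0] * n_machines
    for job, mach, p in ops:
        start = max(job_ready[job], mach_ready[mach])
        job_ready[job] = mach_ready[mach] = start + p
    return max(job_ready)

estimated = [(0, 0, 3.0), (1, 1, 2.0), (0, 1, 2.5), (1, 0, 4.0)]
print(makespan(estimated, 2, 2))  # 7.0 under estimated times

# When exact times become known and deviate, re-evaluate before rescheduling:
actual = [(j, m, p * 1.2) for j, m, p in estimated]
print(makespan(actual, 2, 2))     # 8.4
```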
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
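The motivation for skipping zero bits can be shown with integer arithmetic: a multiply decomposes into shift-adds over only the weight's set (essential) bits. A simplified sketch for unsigned integer weights; the real design kneads essential bits across several weights in hardware, which is not modeled here:

```python
def essential_bits(w):
    """Positions of the set bits in a non-negative integer weight."""
    return [i for i in range(w.bit_length()) if (w >> i) & 1]

def sac_multiply(activation, weight):
    """Split-and-accumulate: activation * weight as shift-adds over the
    weight's essential bits only; zero bits contribute no work."""
    return sum(activation << i for i in essential_bits(weight))

a, w = 23, 0b01000100   # only 2 of this weight's 8 bits are set
assert sac_multiply(a, w) == a * w   # 2 shift-adds replace a full multiply
print(1 - len(essential_bits(w)) / 8)  # 0.75 of the bits were ineffectual
```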
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
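A minimal sketch of the Kalman-filter idea above, using a scalar random-walk trust state. The linear blend mapping the three sensed behaviors to a trust observation, and all noise parameters, are assumptions for illustration, not the paper's identified model:

```python
import numpy as np

class TrustKF:
    """Scalar Kalman filter for a latent trust level t in [0, 1]."""
    def __init__(self, t0=0.5, p0=1.0, q=0.01, r=0.1):
        self.t, self.p, self.q, self.r = t0, p0, q, r  # state, cov, noise params

    def update(self, gaze_on_road_ratio, usage_time_norm, ndrt_perf):
        self.p += self.q                              # predict: random-walk state
        z = np.dot([0.3, 0.4, 0.3],                   # assumed blend weights
                   [1 - gaze_on_road_ratio, usage_time_norm, ndrt_perf])
        k = self.p / (self.p + self.r)                # Kalman gain
        self.t += k * (z - self.t)                    # correct with observation
        self.p *= 1 - k
        return float(np.clip(self.t, 0.0, 1.0))

kf = TrustKF()
for obs in [(0.8, 0.2, 0.5), (0.5, 0.6, 0.7), (0.3, 0.9, 0.8)]:
    print(kf.update(*obs))   # trust estimate rises as monitoring drops
```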
score_0–score_13: 1, 0.002405, 0.001478, 0.000984, 0.000757, 0.000629, 0.000463, 0.000355, 0.000209, 0.000105, 0.000076, 0.000064, 0.000057, 0.000056
On the security of public key protocols Recently the use of public key encryption to provide secure network communication has received considerable attention. Such public key systems are usually effective against passive eavesdroppers, who merely tap the lines and try to decipher the message. It has been pointed out, however, that an improperly designed protocol could be vulnerable to an active saboteur, one who may impersonate another user or alter the message being transmitted. Several models are formulated in which the security of protocols can be discussed precisely. Algorithms and characterizations that can be used to determine protocol security in these models are given.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-real dataset pairs demonstrate the effectiveness of the proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric, real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add it to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communication (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. How to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-real dataset pairs demonstrate the effectiveness of the proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric, real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add it to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communication (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. How to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms for solving them. Cooperative coevolution has been effective for evolving neural networks. Different problem decomposition methods in cooperative coevolution determine how a neural network is decomposed and encoded, which affects its performance. A good problem decomposition method should provide enough diversity and also group interacting variables, which in a neural network are the synapses. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz, and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show improved accuracy when compared to some of the methods from the literature.
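A minimal sketch of the two pieces the abstract combines: an Elman forward step with its recurrent context, and one plausible neuron-level problem decomposition in which each hidden neuron's incoming synapses form one coevolved subcomponent. The paper compares two decomposition methods; this particular grouping is an assumption for illustration:

```python
import numpy as np

def elman_step(x, h, Wxh, Whh, Why, bh, by):
    """One Elman step: the context layer feeds back the previous hidden state."""
    h_new = np.tanh(Wxh @ x + Whh @ h + bh)  # hidden state with recurrence
    y = Why @ h_new + by                     # output layer
    return y, h_new

def neuron_level_decomposition(Wxh, Whh, bh):
    """One subpopulation per hidden neuron: its input synapses, recurrent
    synapses, and bias are evolved together as a single genome."""
    return [np.concatenate([Wxh[i], Whh[i], bh[i:i + 1]])
            for i in range(Wxh.shape[0])]

# Hypothetical tiny network: 1 input, 4 hidden, 1 output
rng = np.random.default_rng(0)
Wxh, Whh = rng.normal(size=(4, 1)), rng.normal(size=(4, 4))
Why, bh, by = rng.normal(size=(1, 4)), np.zeros(4), np.zeros(1)
y, h = elman_step(np.array([0.3]), np.zeros(4), Wxh, Whh, Why, bh, by)
print(len(neuron_level_decomposition(Wxh, Whh, bh)))  # 4 subcomponents
```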
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Design of cascade form FIR filters with discrete valued coefficients
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-real dataset pairs demonstrate the effectiveness of the proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: a significant increase in the number of completed tasks, along with energy savings and cost reductions, is achieved compared to alternative baseline approaches.
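A hedged sketch of the SVM admission idea only (the Lyapunov drift-plus-penalty offloading is omitted for brevity). The feature set, units, and labeling rule below are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic history: [task_size_Mb, deadline_ms, channel_rate_Mbps]
X = rng.uniform([0.1, 5, 1], [10, 100, 50], size=(500, 3))
y = (X[:, 0] / X[:, 2] * 1000 < X[:, 1]).astype(int)   # finished on time?

clf = SVC(kernel="rbf").fit(X, y)

def admit(task):
    """Queue the task only if the SVM predicts on-time completion;
    otherwise drop it early to save energy and queue space."""
    return bool(clf.predict(np.asarray(task).reshape(1, -1))[0])

print(admit([1.0, 80, 25]))   # small task, slack deadline: likely admitted
print(admit([9.5, 10, 2]))    # large task, tight deadline: likely dropped
```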
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
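As a generic illustration of the kind of check such an analytical framework enables (not the authors' model), an M/M/1 approximation of a VNF gives a closed-form latency tail, P(T > t) = e^(-(mu - lambda) t), that can be tested against a URLLC budget.

```python
import math

def urllc_violation_prob(arrival_rate, service_rate, deadline):
    """P(sojourn time > deadline) in a stable M/M/1 queue."""
    assert service_rate > arrival_rate, "queue must be stable"
    return math.exp(-(service_rate - arrival_rate) * deadline)

# Example: 1 ms latency budget against a 99.999% reliability target.
p = urllc_violation_prob(arrival_rate=8000.0, service_rate=20000.0,
                         deadline=1e-3)
print(p, p < 1e-5)   # ~6.1e-06, target met
```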
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital-twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
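A toy sketch of auction-style head/tail selection in the spirit of AG-CDM; the bid definition (compute capability per unit cost) is an assumption for illustration, not the paper's mechanism.

```python
def select_head_and_tail(dts):
    """dts: list of (name, compute_capability, cost).
    Bid = capability / cost; the highest bidder leads the group,
    the lowest bidder trails it."""
    ranked = sorted(dts, key=lambda d: d[1] / d[2], reverse=True)
    return ranked[0][0], ranked[-1][0]

group = [("AV1", 8.0, 2.0), ("AV2", 6.0, 1.0), ("AV3", 4.0, 4.0)]
head, tail = select_head_and_tail(group)
print(head, tail)   # AV2 leads (bid 6.0), AV3 trails (bid 1.0)
```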
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
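A minimal sketch of the two ingredients the highlights name: a speed-deviation reward and a 1 s reaction delay. The exact reward shaping may differ, and DDPG training itself is omitted; the step size is an assumption.

```python
REACTION_DELAY_STEPS = 10          # 1 s at an assumed 0.1 s simulation step

def reward(sim_speed, observed_speed):
    """Penalize deviation from the human driver's observed speed."""
    return -abs(sim_speed - observed_speed)

def delayed_state(history, t):
    """Return the state the agent actually reacts to at step t."""
    return history[max(0, t - REACTION_DELAY_STEPS)]

history = [(25.0 + 0.1 * k, 12.0) for k in range(50)]   # (lead_speed, gap)
print(delayed_state(history, 30), reward(24.2, 24.8))   # state at t=20, -0.6
```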
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
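A schematic of plain over-the-air aggregation (no relays), showing why the weakest channel dominates: to align signals, each device inverts its channel, so the common scaling factor is capped by the worst channel under a per-device power budget. Real fading and the paper's relay design are simplified away.

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.uniform(0.1, 1.0, size=5)            # channel gains of 5 devices
x = rng.normal(size=(5, 4))                  # local model updates (dim 4)
P = 1.0                                      # per-device transmit power budget
c = np.sqrt(P) * h.min()                     # alignment factor, set by straggler
y = (h * (c / h)[:, None] * x).sum(axis=0)   # received superposition
y += rng.normal(scale=0.05, size=4)          # receiver noise
avg_est = y / (c * len(h))                   # server-side de-scaling
print(np.round(avg_est - x.mean(axis=0), 3)) # residual aggregation error
```

A small h.min() forces a small c, amplifying the effective noise after de-scaling; relays raise the weakest effective channel, which is the straggler remedy the paper pursues.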
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
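An illustrative software model of the idea behind split-and-accumulate, assuming unsigned integer weights: only the set (essential) bits of each weight do work, so the multiply becomes shifts and adds, and zero bits cost nothing. The hardware kneading/packing machinery is not modeled.

```python
def sac_dot(weights, activations):
    """Dot product where each multiply is replaced by shift-adds
    over the essential (set) bits of the weight only."""
    acc = 0
    for w, a in zip(weights, activations):
        bit = 0
        while w:
            if w & 1:                 # essential bit: contributes a << bit
                acc += a << bit
            w >>= 1
            bit += 1
    return acc

ws, xs = [0b00010010, 0, 0b10000000], [3, 7, 2]
assert sac_dot(ws, xs) == sum(w * a for w, a in zip(ws, xs))
print(sac_dot(ws, xs))   # 310; the zero weight and all zero bits were skipped
```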
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
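A minimal scalar Kalman filter sketch in the spirit of the paper's estimator. The paper's state-space model fuses several behaviors; here a single assumed trust proxy (one minus the road-monitoring ratio) and illustrative noise values are used.

```python
def kalman_step(x, P, z, q=0.01, r=0.1):
    # Predict: trust assumed locally constant, with process noise q.
    P = P + q
    # Update with a direct trust proxy z (measurement model z = x + noise).
    K = P / (P + r)            # Kalman gain
    x = x + K * (z - x)        # correct estimate toward the measurement
    P = (1 - K) * P            # reduce uncertainty
    return x, P

x, P = 0.5, 1.0                            # prior trust estimate and variance
monitoring = [0.60, 0.50, 0.40, 0.35]      # fraction of glances at the road
for m in monitoring:
    x, P = kalman_step(x, P, 1.0 - m)      # less monitoring -> higher trust
print(round(x, 3), round(P, 4))            # trust drifts upward over time
```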
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
The Recognition of Human Movement Using Temporal Templates A new view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template, a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two-component version of the templates: the first value is a binary value indicating the presence of motion, and the second value is a function of the recency of motion in a sequence. We then develop a recognition method that matches temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms.
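The standard update for the two components the abstract describes: a binary motion-energy image (presence of motion) and a recency-weighted motion-history image. NumPy arrays stand in for a real video pipeline; the duration tau is illustrative.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """Motion-history image: set moving pixels to tau, decay the rest."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

rng = np.random.default_rng(2)
mhi = np.zeros((4, 4))
for _ in range(5):                         # toy per-frame motion masks
    mhi = update_mhi(mhi, rng.random((4, 4)) > 0.7)
mei = mhi > 0                              # motion-energy image (binary)
print(mhi)                                 # larger values = more recent motion
print(int(mei.sum()), "pixels ever moved")
```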
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
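As a loose illustration of style-content recombination (not the paper's SCD-DR network), adaptive instance normalization transfers the channel-wise style statistics of a real-domain feature map onto synthetic content, yielding a "hybrid domain" sample.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """content, style: (C, H, W) feature maps; match per-channel mean/std."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True) + eps
    return s_sd * (content - c_mu) / c_sd + s_mu

rng = np.random.default_rng(3)
synthetic = rng.random((8, 16, 16))            # synthetic-domain features
real = rng.random((8, 16, 16)) * 2 + 1         # real-domain features
hybrid = adain(synthetic, real)
print(round(hybrid.mean(), 3), round(real.mean(), 3))  # style stats now match
```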
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while exercising custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: a significant increase in the number of completed tasks, along with energy savings and cost reductions, is achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital-twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
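A simple sketch of the behavioral measure the study relies on: monitoring frequency, computed here as gaze transitions from the non-driving-related task to the road per minute. The AOI labels and sampling rate are assumptions.

```python
def monitoring_frequency(gaze_aois, hz=60):
    """gaze_aois: per-sample areas of interest, e.g. 'road' or 'ndrt'."""
    transitions = sum(1 for a, b in zip(gaze_aois, gaze_aois[1:])
                      if a == "ndrt" and b == "road")
    minutes = len(gaze_aois) / hz / 60
    return transitions / minutes if minutes else 0.0

# 10 cycles of 5 s on the NDRT then 1 s on the road = 1 minute at 60 Hz.
samples = (["ndrt"] * 300 + ["road"] * 60) * 10
print(round(monitoring_frequency(samples), 2), "glances/min")   # 10.0
```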
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Basic concepts and taxonomy of dependable and secure computing This paper gives the main definitions relating to dependability, a generic concept including as special cases such attributes as reliability, availability, safety, integrity, maintainability, etc. Security brings in concerns for confidentiality, in addition to availability and integrity. Basic definitions are given first. They are then commented upon and supplemented by additional definitions, which address the threats to dependability and security (faults, errors, failures), their attributes, and the means for their achievement (fault prevention, fault tolerance, fault removal, fault forecasting). The aim is to explicate a set of general concepts, of relevance across a wide range of situations and, therefore, helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of system failures.
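A compact encoding of the taxonomy's vocabulary and its causal chain (a fault activates into an error, which may propagate into a failure); purely illustrative of the definitions, not an executable dependability analysis.

```python
from enum import Enum

class Attribute(Enum):
    RELIABILITY = "reliability"
    AVAILABILITY = "availability"
    SAFETY = "safety"
    INTEGRITY = "integrity"
    MAINTAINABILITY = "maintainability"
    CONFIDENTIALITY = "confidentiality"   # added by security

class Means(Enum):
    PREVENTION = "fault prevention"
    TOLERANCE = "fault tolerance"
    REMOVAL = "fault removal"
    FORECASTING = "fault forecasting"

CAUSAL_CHAIN = ("fault", "error", "failure")

def threatens(stage: str) -> str:
    """Next stage in the chain if the threat is not contained."""
    i = CAUSAL_CHAIN.index(stage)
    return CAUSAL_CHAIN[i + 1] if i + 1 < len(CAUSAL_CHAIN) else "system boundary"

print(threatens("fault"), threatens("error"), Means.TOLERANCE.value)
```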
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while exercising custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes thanks to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: a significant increase in the number of completed tasks, along with energy savings and cost reductions, is achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital-twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating post-silicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core Systems on Chip (SoCs). Such quick detection is essential because a long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude, to only a few hundred cycles for most bug scenarios. 3. Our new technique enables a 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation, without any hardware modification. Hence, it is readily applicable to existing designs.
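A toy sketch in the spirit of quick error detection (not the paper's tool): run the same computation through two "register spaces" and compare after every operation, so a silent corruption becomes a check failure within a few operations instead of billions of cycles later. The error-injection hook is purely illustrative.

```python
def qed_run(ops, inject_error_at=None):
    """ops: list of (binary_op, operand) applied to both copies."""
    main, shadow = 0, 0
    for i, (op, val) in enumerate(ops):
        main = op(main, val)
        shadow = op(shadow, val)
        if inject_error_at == i:
            main ^= 0x4          # model a bit flip caused by a design bug
        if main != shadow:       # frequent checks => short detection latency
            return f"error detected at op {i}"
    return "no divergence"

add = lambda a, b: a + b
prog = [(add, 3), (add, 5), (add, 7)]
print(qed_run(prog))                      # no divergence
print(qed_run(prog, inject_error_at=1))   # error detected at op 1
```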
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while exercising custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, so as to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion determines which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Owing to its flexible structure, the model can also be applied to the efficient and effective modeling of other industrial processes.
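As a rough illustration of the multi-model framework above, the sketch below substitutes plain k-means for WKFCM clustering and ridge regression for the broad learning model (both substitutions are mine, for brevity); the data are synthetic stand-ins for sintering parameters and carbon efficiency.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                                  # stand-in sintering parameters
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=300)   # stand-in carbon efficiency

# 1) Identify operating conditions by clustering (WKFCM in the paper).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 2) Fit one lightweight model per operating condition (the BLM in the paper).
models = {c: Ridge().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(3)}

# 3) Dispatch a new sample to the model of its nearest cluster (nearest neighbor criterion).
x_new = rng.normal(size=(1, 5))
c = int(km.predict(x_new)[0])
print(f"condition {c}, predicted efficiency {models[c].predict(x_new)[0]:.3f}")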
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: it yields a significant increase in the number of completed tasks, along with energy savings and cost reductions, compared with alternative baseline approaches.
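The admission step described above can be sketched independently of the Lyapunov offloading machinery. The snippet below trains an SVM to predict whether a task will finish before its deadline from hypothetical task features; the feature set and the synthetic labeling rule are assumptions for illustration only.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical task features: [input size, CPU work required, deadline, link rate]
X = rng.uniform(size=(500, 4))
# Synthetic label: task completes if required work is small relative to deadline * rate.
y = (X[:, 1] < X[:, 2] * X[:, 3] * 1.5).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

def admit(task):
    # Admit the task to the user's queue only if the SVM predicts on-time completion.
    return bool(clf.predict(task.reshape(1, -1))[0])

print(admit(np.array([0.2, 0.10, 0.8, 0.9])))   # ample deadline: likely admitted
print(admit(np.array([0.2, 0.95, 0.1, 0.1])))   # heavy task, tight deadline: likely dropped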
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints while retaining cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning.
• A car-following model was proposed based on deep reinforcement learning.
• It uses speed deviations as the reward function and considers a reaction delay of 1 s.
• The deep deterministic policy gradient algorithm was used to optimize the model.
• The model outperformed traditional and recent data-driven car-following models.
• The model demonstrated good generalization capability.
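A minimal sketch of the reward and reaction-delay ideas in the highlights above: actions take effect 1 s after they are chosen, and the per-step reward is the negative absolute deviation from the human speed trace. The kinematics and the toy proportional policy are crude stand-ins, not the paper's DDPG setup.

from collections import deque

DT, DELAY_STEPS = 0.1, 10          # 0.1 s steps; 1 s reaction delay

def rollout(policy, human_speeds):
    v = human_speeds[0]
    pending = deque([0.0] * DELAY_STEPS)     # queued accelerations awaiting the delay
    total = 0.0
    for v_h in human_speeds:
        pending.append(policy(v, v_h))       # action chosen now...
        v += pending.popleft() * DT          # ...takes effect 1 s later
        total += -abs(v - v_h)               # reward: negative speed deviation
    return total

human = [10 + 0.1 * k for k in range(100)]             # gentle acceleration trace
print(rollout(lambda v, v_h: 0.5 * (v_h - v), human))  # toy proportional policy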
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently, causing deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. The problem consists of n jobs to be processed by m machines, where the processing times of jobs deviate from their estimates. The objective is near-optimal rescheduling after any change in tasks, so as to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed once the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times, and the statistical results illustrate the efficiency of the proposed algorithm.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can eliminate the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights simultaneously. In addition, a split-and-accumulate (SAC) computing pattern to replace the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
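The two observations above, the high fraction of zero bits in weights and the possibility of accumulating only over essential (set) bits, can be sketched as follows. This is my simplified functional reading of SAC, not the Tetris hardware design; the weights are synthetic.

import numpy as np

rng = np.random.default_rng(2)
w = rng.integers(0, 256, size=1024)   # synthetic unsigned 8-bit weights
zero_bit_fraction = 1 - sum(bin(int(v)).count("1") for v in w) / (8 * len(w))
print(f"zero-bit fraction of the 8-bit weights: {zero_bit_fraction:.1%}")

def sac_multiply(x, w):
    # Shift-and-accumulate over set bits only: each iteration handles one
    # essential bit of w; zero bits are never visited, unlike a plain MAC.
    acc = 0
    while w:
        lsb = w & -w                         # isolate the lowest set bit
        acc += x << (lsb.bit_length() - 1)   # add x shifted by that bit position
        w ^= lsb                             # clear the bit
    return acc

assert sac_multiply(13, 0b1011) == 13 * 0b1011
print(sac_multiply(13, 0b1011))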
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
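A minimal sketch of the estimation idea: treat trust as a scalar linear-Gaussian state and fuse one noisy behavioral measurement per interaction with a Kalman update. All gains, noise variances, and measurement values below are illustrative assumptions, not the parameters identified in the study.

# Scalar trust model: t_k = a*t_{k-1} + b*u_k + w,   z_k = h*t_k + v
a, b, h = 0.95, 0.05, 1.0       # assumed dynamics/observation gains
Q, R = 0.01, 0.25               # assumed process/measurement noise variances

t_est, P = 0.5, 1.0             # initial trust estimate and its variance
for z, u in [(0.6, 1.0), (0.7, 1.0), (0.4, 0.0)]:   # synthetic measurements/inputs
    # Predict
    t_pred = a * t_est + b * u
    P_pred = a * P * a + Q
    # Update with the sensed behavior z
    K = P_pred * h / (h * P_pred * h + R)
    t_est = t_pred + K * (z - h * t_pred)
    P = (1 - K * h) * P_pred
    print(f"trust estimate: {t_est:.3f} (variance {P:.3f})")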
1
0.002331
0.001433
0.000954
0.000734
0.000614
0.000448
0.000344
0.000202
0.000102
0.000074
0.000062
0.000055
0.000054
Wireless Information and Power Transfer: Architecture Design and Rate-Energy Tradeoff Simultaneous information and power transfer over wireless channels potentially offers great convenience to mobile users. Yet practical receiver designs impose technical constraints on its hardware realization, as practical circuits for harvesting energy from radio signals are not yet able to decode the carried information directly. To make theoretical progress, we propose a general receiver operation, namely, dynamic power splitting (DPS), which splits the received signal with an adjustable power ratio for energy harvesting and information decoding, separately. Three special cases of DPS, namely, time switching (TS), static power splitting (SPS), and on-off power splitting (OPS), are investigated; the TS and SPS schemes can be treated as special cases of OPS. Moreover, we propose two types of practical receiver architectures, namely, separated versus integrated information and energy receivers, where the integrated receiver combines the front-end components of the separated receiver and thus achieves a smaller form factor. The rate-energy tradeoff for each architecture is characterized by a so-called rate-energy (R-E) region, and the optimal transmission strategy is derived to achieve different rate-energy tradeoffs. With receiver circuit power consumption taken into account, it is shown that the OPS scheme is optimal for both receivers; for the ideal case when the receiver circuit consumes no power, the SPS scheme is optimal for both receivers. In addition, we study the performance of the two types of receivers under a realistic system setup that employs practical modulation. Our results provide useful insights into optimal practical receiver design for simultaneous wireless information and power transfer (SWIPT).
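The power-splitting tradeoff above can be illustrated numerically. In the sketch below, a fraction rho of the received power feeds the information decoder and the remainder feeds the energy harvester; sweeping rho traces an achievable rate-energy boundary for static power splitting. The channel, noise, and harvesting-efficiency values are illustrative.

import numpy as np

P_rx   = 1e-3     # received signal power (W), illustrative
sigma2 = 1e-6     # noise power at the decoder (W), illustrative
zeta   = 0.5      # assumed energy harvesting efficiency

def rate_energy(rho):
    # SPS: ratio rho of received power to decoding, 1-rho to harvesting.
    rate   = np.log2(1 + rho * P_rx / sigma2)   # bits/s/Hz
    energy = zeta * (1 - rho) * P_rx            # harvested power (W)
    return rate, energy

for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    r, e = rate_energy(rho)
    print(f"rho={rho:.2f}: rate={r:6.2f} bit/s/Hz, harvested={e * 1e3:.3f} mW")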
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications depends heavily on artificial intelligence (AI) technologies such as computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm struggles to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, is well suited to improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the shortage of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic data in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, so as to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion determines which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Owing to its flexible structure, the model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: it yields a significant increase in the number of completed tasks, along with energy savings and cost reductions, compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints while retaining cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning.
• A car-following model was proposed based on deep reinforcement learning.
• It uses speed deviations as the reward function and considers a reaction delay of 1 s.
• The deep deterministic policy gradient algorithm was used to optimize the model.
• The model outperformed traditional and recent data-driven car-following models.
• The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences with a single pass of RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed with a set of techniques, including spatial-aware representation of input cell tower sequences, an encoder-decoder framework for map matching with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
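A structural skeleton of the encoder-decoder idea above: a GRU encoder over cell tower embeddings and a GRU decoder emitting road-segment logits. Vocabulary sizes, dimensions, and the teacher-forcing interface are placeholders; this is not DMM's actual architecture, and it omits the spatial-aware representation details and the RL refinement stage.

import torch
import torch.nn as nn

N_TOWERS, N_ROADS, EMB, HID = 1000, 5000, 64, 128   # illustrative sizes

class Seq2SeqMapMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.tower_emb = nn.Embedding(N_TOWERS, EMB)  # spatial-aware embeddings in DMM
        self.road_emb  = nn.Embedding(N_ROADS, EMB)
        self.encoder   = nn.GRU(EMB, HID, batch_first=True)
        self.decoder   = nn.GRU(EMB, HID, batch_first=True)
        self.out       = nn.Linear(HID, N_ROADS)

    def forward(self, towers, roads_in):
        # Encode the cell tower sequence, then decode road segments step by step.
        _, h = self.encoder(self.tower_emb(towers))
        dec, _ = self.decoder(self.road_emb(roads_in), h)
        return self.out(dec)                  # per-step logits over road segments

model = Seq2SeqMapMatcher()
towers   = torch.randint(0, N_TOWERS, (2, 12))  # batch of 2 tower sequences
roads_in = torch.randint(0, N_ROADS, (2, 20))   # shifted targets (teacher forcing)
print(model(towers, roads_in).shape)            # torch.Size([2, 20, 5000])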
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001885
0.001159
0.000771
0.000593
0.000493
0.000363
0.000278
0.000164
0.000082
0.00006
0.00005
0.000045
0.000044
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitist approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all three of the above difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with those of another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
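The fast non-dominated sorting step with the O(MN²) bound mentioned above is compact enough to sketch directly (minimization convention; crowding distance and the elitist selection loop are omitted):

def dominates(p, q):
    # p dominates q if it is no worse in every objective and better in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_nondominated_sort(pop):
    S = [[] for _ in pop]          # indices each solution dominates
    n = [0] * len(pop)             # domination counts
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)    # first Pareto front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:      # all dominators already assigned to earlier fronts
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]             # drop the trailing empty front

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(fast_nondominated_sort(pop))   # [[0, 1, 2], [3], [4]] for this toy population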
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications depends heavily on artificial intelligence (AI) technologies such as computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm struggles to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, is well suited to improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the shortage of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic data in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, so as to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion determines which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Owing to its flexible structure, the model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: it yields a significant increase in the number of completed tasks, along with energy savings and cost reductions, compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints while retaining cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training has mostly been done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (intelligent reflecting surface) assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance under noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
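A minimal sketch of the federated averaging loop that such a scheme rests on: each user fits the shared CNN on local (pilot, channel) data and only model weights travel to the BS. The toy CNN, tensor shapes, and single-epoch SGD updates are placeholders, not the paper's architecture or training schedule.

import copy
import torch
import torch.nn as nn

def make_cnn():
    # Toy stand-in for the channel estimation CNN: pilot image in, channel image out.
    return nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 2, 3, padding=1))

def local_update(model, X, Y, lr=1e-3):
    m = copy.deepcopy(model)                 # each user trains a local copy
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    opt.zero_grad()
    nn.functional.mse_loss(m(X), Y).backward()
    opt.step()
    return m.state_dict()

global_model = make_cnn()
users = [(torch.randn(8, 2, 16, 16), torch.randn(8, 2, 16, 16)) for _ in range(4)]

for rnd in range(3):                         # communication rounds
    states = [local_update(global_model, X, Y) for X, Y in users]
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)        # BS averages weights; raw data never leaves users
    with torch.no_grad():
        loss = sum(nn.functional.mse_loss(global_model(X), Y) for X, Y in users) / len(users)
    print(f"round {rnd}: mean local loss {loss:.4f}")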
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and that an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures can seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique that can predict the failure-proneness of a node in a cloud service system based on historical data, before node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, thereby improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure can be caused by a variety of reasons and reflected by many temporal and spatial signals; furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate the spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system, and the results confirm its effectiveness. We have also successfully applied the proposed approach in real industrial practice.
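Component 4) above, choosing a decision threshold under asymmetric costs, can be sketched in isolation. The scores, labels, and cost ratio below are synthetic; the point is only that the optimal cutoff shifts toward catching more failures when a miss is much costlier than a false alarm.

import numpy as np

rng = np.random.default_rng(3)
scores = rng.uniform(size=1000)   # ranker output: failure-proneness per node
labels = (scores + rng.normal(scale=0.3, size=1000) > 0.8).astype(int)  # rare true failures

C_FN, C_FP = 50.0, 1.0   # assumed costs: missing a faulty node vs. a false alarm

def expected_cost(th):
    pred = scores >= th
    fn = np.sum(~pred & (labels == 1))   # faulty nodes we failed to flag
    fp = np.sum(pred & (labels == 0))    # healthy nodes flagged needlessly
    return C_FN * fn + C_FP * fp

ths = np.linspace(0, 1, 101)
best = ths[int(np.argmin([expected_cost(t) for t in ths]))]
print(f"cost-optimal threshold: {best:.2f}")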
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001854
0.001139
0.000759
0.000584
0.000485
0.000357
0.000274
0.000161
0.000081
0.000059
0.000049
0.000044
0.000043
S2CBench: Synthesizable SystemC Benchmark Suite for High-Level Synthesis. High-level synthesis (HLS) is being increasingly used for commercial VLSI designs. This has led to the proliferation of many HLS tools. In order to evaluate their performance and functionalities, a standard benchmark suite in a common language supported by all of them is required. This letter presents a benchmark suite, which complies with the latest Synthesizable SystemC standard, called S2CBench...
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications depends heavily on artificial intelligence (AI) technologies such as computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm struggles to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, is well suited to improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
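The abstract does not spell out SCD-DR's mechanics, so as a hedged illustration of the general style-content recombination idea, the sketch below uses AdaIN-style adaptive instance normalization (a standard stand-in technique, plainly not the authors' module): content features keep their spatial structure while taking on the style features' per-channel statistics.

```python
import numpy as np

def adain_recombine(content_feat: np.ndarray, style_feat: np.ndarray,
                    eps: float = 1e-5) -> np.ndarray:
    """AdaIN-style recombination: keep the content's spatial structure,
    transfer the style's per-channel mean/std. Shapes: (C, H, W)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

# Toy usage: synthetic-domain content recombined with real-domain style stats.
rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(64, 32, 32))   # content features
real = rng.normal(0.5, 2.0, size=(64, 32, 32))        # style features
hybrid = adain_recombine(synthetic, real)             # hybrid-domain features
```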
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
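The paper's key formalization, that triggers are conjunctive while observations are disjunctive, can be stated directly as a predicate. The sketch below is a minimal encoding of that rule; the field names and example strings are illustrative, not RemembERR's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Erratum:
    """Minimal encoding of the abstract's formalization (field names are
    illustrative, not RemembERR's actual schema)."""
    triggers: set[str] = field(default_factory=set)      # ALL must be applied
    observations: set[str] = field(default_factory=set)  # ANY one suffices

def bug_detected(e: Erratum, applied: set[str], observed: set[str]) -> bool:
    # Triggers are conjunctive; observations are disjunctive.
    return e.triggers <= applied and bool(e.observations & observed)

err = Erratum(triggers={"power_transition", "msr_config"},
              observations={"hang", "wrong_result"})
assert not bug_detected(err, {"power_transition"}, {"hang"})   # missing a trigger
assert bug_detected(err, {"power_transition", "msr_config"}, {"hang"})
```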
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
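The final dispatch step, invoking the BLM of the nearest operating condition, is simple enough to sketch. The snippet below assumes the WKFCM cluster centers and per-condition models are already fitted; the linear stand-ins are toys, not the authors' implementation.

```python
import numpy as np

def predict_with_condition_models(x, centers, models):
    """Route a sample to the model of its nearest operating-condition
    cluster (the abstract's nearest-neighbor criterion). `centers` holds
    the WKFCM cluster centers and `models[k]` is the BLM trained on
    cluster k; both are assumed already fitted -- this is only dispatch."""
    k = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
    return models[k](x)

# Toy usage with two linear stand-ins for the per-condition BLMs.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
models = [lambda x: x @ np.array([1.0, -1.0]),
          lambda x: x @ np.array([0.5, 0.5]) + 2.0]
print(predict_with_condition_models(np.array([4.8, 5.1]), centers, models))
```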
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to add it to or drop it from the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
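As a hedged sketch of the generic Lyapunov drift-plus-penalty rule (not the paper's exact formulation), each offloading decision can weigh queue backlog against a penalty combining energy and monetary cost; the option set and all numbers below are invented for illustration.

```python
def offload_decision(Q, options, V=50.0):
    """Generic drift-plus-penalty step: for queue backlog Q, pick the
    option minimizing Q * service_time + V * (energy + price).
    `options` maps name -> (service_time_s, energy_J, price)."""
    def dpp(opt):
        t, e, price = options[opt]
        return Q * t + V * (e + price)
    return min(options, key=dpp)

options = {
    "local": (0.90, 1.50, 0.00),   # slow, energy-hungry, free
    "d2d":   (0.50, 0.30, 0.01),   # cheap nearby helper
    "mec":   (0.20, 0.40, 0.20),   # fast but priced edge server
    "cloud": (0.60, 0.35, 0.05),
}
print(offload_decision(Q=3.0, options=options))    # light backlog -> cheap D2D link
print(offload_decision(Q=200.0, options=options))  # heavy backlog -> fast MEC server
```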
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
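The abstract gives only the outline of CG-DDM, so the following is a toy rendering of the coalition-game intuition under an assumed cost model (computation load divided by pooled capability, plus a size-dependent transmission term); it is not the paper's actual mechanism.

```python
def best_group(av_load, groups, alpha=1.0, beta=0.5):
    """Hedged sketch of the coalition choice: an AV's DT joins the group
    minimizing computation cost (load / pooled capability) plus a
    transmission cost proportional to group size. The cost model and
    weights alpha/beta are assumptions, not the paper's."""
    def cost(g):
        capability, size = groups[g]
        return alpha * av_load / capability + beta * size
    return min(groups, key=cost)

groups = {"g1": (20.0, 3), "g2": (35.0, 6)}   # (pooled capability, member count)
print(best_group(av_load=50.0, groups=groups))  # -> "g1" under these numbers
```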
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
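A minimal sketch of the two highlighted ingredients, a speed-deviation reward and a 1 s reaction delay, might look as follows; the step size, reward scaling, and kinematics are assumptions, and the actual model trains a DDPG agent on top of such an environment.

```python
from collections import deque

DT = 0.1                        # simulation step (s); an assumed value
DELAY_STEPS = round(1.0 / DT)   # the 1 s reaction delay from the highlights

def reward(v_follower: float, v_leader: float) -> float:
    """Reward shaped on speed deviation (the exact scaling is assumed)."""
    return -abs(v_follower - v_leader)

# Actions take effect one reaction delay after they are chosen.
action_buffer = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)

def step(v_follower: float, new_action: float) -> float:
    """Apply the acceleration chosen 1 s ago; queue the new one."""
    applied = action_buffer.popleft()
    action_buffer.append(new_action)
    return v_follower + applied * DT
```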
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
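Monitoring frequency, the gaze measure the study relates to trust, can be computed from a labeled gaze stream roughly as below; the labels, sampling rate, and minimum-glance threshold are illustrative choices, not the study's coding scheme.

```python
def monitoring_frequency(gaze_labels, dt=1 / 60, min_glance_s=0.1):
    """Glances toward the automation per minute, from a per-sample
    gaze-target sequence. A glance = a run of 'automation' samples
    lasting at least `min_glance_s` (all thresholds are assumptions)."""
    glances, run = 0, 0
    for label in gaze_labels + ["end"]:          # sentinel flushes the last run
        if label == "automation":
            run += 1
        else:
            if run * dt >= min_glance_s:
                glances += 1
            run = 0
    minutes = len(gaze_labels) * dt / 60
    return glances / minutes if minutes else 0.0

labels = ["ndrt"] * 300 + ["automation"] * 30 + ["ndrt"] * 270  # 10 s at 60 Hz
print(monitoring_frequency(labels))   # one 0.5 s glance in 10 s -> 6 glances/min
```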
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
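Two of the abstract's claims are easy to make concrete in software: counting the fraction of zero bits in quantized weights, and accumulating only the essential (set) bits of a weight in place of a full multiply. The sketch below illustrates these ideas; it is not Tetris's hardware datapath, and the weight distribution is synthetic.

```python
import numpy as np

def zero_bit_fraction(weights_q: np.ndarray, bits: int = 8) -> float:
    """Fraction of zero bits across a quantized (unsigned) weight tensor --
    the kind of statistic behind the paper's 68.9% figure."""
    w = weights_q.astype(np.uint64)
    ones = sum(int(((w >> b) & 1).sum()) for b in range(bits))
    return 1.0 - ones / (w.size * bits)

def sac_multiply(x: int, w: int) -> int:
    """Split-and-accumulate in spirit: decompose w into its essential
    (set) bits and accumulate shifted copies of x, skipping zero bits.
    A software illustration only; Tetris also kneads weights in hardware."""
    acc, b = 0, 0
    while w:
        if w & 1:
            acc += x << b    # only essential bits contribute
        w >>= 1
        b += 1
    return acc

assert sac_multiply(7, 0b1010) == 7 * 10
rng = np.random.default_rng(0)
print(zero_bit_fraction(rng.integers(0, 256, size=10_000)))  # ~0.5 for uniform bytes
```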
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
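Since the abstract names a Kalman filter over sensed behaviors, a minimal scalar version of the predict/update cycle is sketched below; all gains and noise covariances are invented for illustration, not the paper's identified parameters.

```python
def kalman_trust_update(t_est, P, z, A=1.0, Q=0.01, H=1.0, R=0.25):
    """One predict/update cycle of a scalar Kalman filter tracking a
    trust level from a sensed-behavior measurement z. All constants
    here are illustrative, not the paper's identified parameters."""
    # Predict
    t_pred = A * t_est
    P_pred = A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)
    t_new = t_pred + K * (z - H * t_pred)
    P_new = (1 - K * H) * P_pred
    return t_new, P_new

trust, P = 0.5, 1.0
for z in [0.7, 0.65, 0.8]:   # e.g., trust proxies fused from gaze/usage/NDRT
    trust, P = kalman_trust_update(trust, P, z)
print(round(trust, 3))
```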
Scores (score_0 to score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Massive MIMO for next generation wireless systems Multi-user MIMO offers big advantages over conventional point-to-point MIMO: it works with cheap single-antenna terminals, a rich scattering environment is not required, and resource allocation is simplified because every active terminal utilizes all of the time-frequency bins. However, multi-user MIMO, as originally envisioned, with roughly equal numbers of service antennas and terminals and frequency-division duplex operation, is not a scalable technology. Massive MIMO (also known as large-scale antenna systems, very large MIMO, hyper MIMO, full-dimension MIMO, and ARGOS) makes a clean break with current practice through the use of a large excess of service antennas over active terminals and time-division duplex operation. Extra antennas help by focusing energy into ever smaller regions of space to bring huge improvements in throughput and radiated energy efficiency. Other benefits of massive MIMO include extensive use of inexpensive low-power components, reduced latency, simplification of the MAC layer, and robustness against intentional jamming. The anticipated throughput depends on the propagation environment providing asymptotically orthogonal channels to the terminals, but so far experiments have not disclosed any limitations in this regard. While massive MIMO renders many traditional research problems irrelevant, it uncovers entirely new problems that urgently need attention: the challenge of making many low-cost low-precision components that work effectively together, acquisition and synchronization for newly joined terminals, the exploitation of extra degrees of freedom provided by the excess of service antennas, reducing internal power consumption to achieve total energy efficiency reductions, and finding new deployment scenarios. This article presents an overview of the massive MIMO concept and contemporary research on the topic.
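The "asymptotically orthogonal channels" claim is easy to check numerically for i.i.d. Rayleigh fading: the normalized cross-correlation between two users' channel vectors shrinks as the number of service antennas M grows. A small Monte Carlo sketch under that i.i.d. complex Gaussian assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_cross_correlation(M: int, K: int = 8, trials: int = 100) -> float:
    """Average |h_i^H h_j| / M between distinct users for i.i.d. Rayleigh
    channels; under favorable propagation this vanishes as M grows."""
    vals = []
    for _ in range(trials):
        H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
        G = np.abs(H.conj().T @ H) / M
        vals.append(G[~np.eye(K, dtype=bool)].mean())  # off-diagonal entries
    return float(np.mean(vals))

for M in (16, 128, 1024):   # growing excess of service antennas
    print(M, round(mean_cross_correlation(M), 3))      # shrinks roughly as 1/sqrt(M)
```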
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to add it to or drop it from the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A latent space-based estimation of distribution algorithm for large-scale global optimization Large-scale global optimization problems (LSGOs) have received considerable attention in the field of meta-heuristic algorithms. Estimation of distribution algorithms (EDAs) are a major branch of meta-heuristic algorithms. However, effectively building the probabilistic model for an EDA in high dimensions remains an obstacle, making EDAs less attractive due to their large computational requirements. To overcome these shortcomings, this paper proposes a latent space-based EDA (LS-EDA), which transforms the multivariate probabilistic model of a Gaussian-based EDA into its principal-component latent subspace of lower dimensionality. LS-EDA can efficiently reduce the complexity of the EDA while maintaining its probability model without losing key information, thereby scaling up its performance for LSGOs. When the original dimensions are projected onto the latent subspace, the dimensions with larger projected values contribute more to the optimization process. LS-EDA can also help recognize and understand the problem structure, especially for black-box optimization problems. Due to dimensionality reduction, its computational budget and population size can be effectively reduced, while its performance is highly competitive in comparison with state-of-the-art meta-heuristic algorithms for LSGOs. In order to understand the strengths and weaknesses of LS-EDA, we carried out extensive computational studies. Our results reveal that LS-EDA outperforms the others on benchmark functions with overlapping and nonseparable variables.
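A simplified reading of LS-EDA, namely fitting the Gaussian model in the elites' top-d principal-component subspace, sampling there, and mapping back, can be sketched as below. The residual-noise term and all constants are our assumptions, not the authors' algorithm.

```python
import numpy as np

def ls_eda_step(pop, fitness, d=5, elite_frac=0.3, rng=None):
    """One generation of a latent-space Gaussian EDA in the spirit of
    LS-EDA (a simplified reading of the abstract, not the authors'
    code): model the elites in their top-d principal-component
    subspace, sample there, map back, and add a small residual term
    so the remaining directions keep exploring."""
    rng = rng or np.random.default_rng()
    n, D = pop.shape
    elites = pop[np.argsort(fitness)[: max(2, int(elite_frac * n))]]  # minimization
    mu = elites.mean(axis=0)
    _, S, Vt = np.linalg.svd(elites - mu, full_matrices=False)
    B = Vt[:d]                                   # (d, D) latent basis
    Z = (elites - mu) @ B.T                      # elites in latent coordinates
    z_std = Z.std(axis=0) + 1e-12
    resid = np.sqrt(max(float((S[d:] ** 2).sum()), 0.0) /
                    (elites.shape[0] * D))       # leftover spread per element
    z = rng.normal(0.0, z_std, size=(n, d))
    return mu + z @ B + rng.normal(0.0, resid, size=(n, D))

# Toy run on the 50-D sphere function.
rng = np.random.default_rng(2)
pop = rng.normal(0, 5, size=(100, 50))
for _ in range(40):
    pop = ls_eda_step(pop, (pop ** 2).sum(axis=1), rng=rng)
print(round(float((pop ** 2).sum(axis=1).min()), 4))
```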
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.002117, 0.001301, 0.000867, 0.000667, 0.000554, 0.000407, 0.000313, 0.000184, 0.000092, 0.000067, 0.000056, 0.00005, 0.000049
A Low-Complexity Analytical Modeling for Cross-Layer Adaptive Error Protection in Video Over WLAN We find a low-complexity and accurate model to solve the problem of optimizing MAC-layer transmission of real-time video over wireless local area networks (WLANs) using cross-layer techniques. The objective in this problem is to obtain the optimal MAC retry limit in order to minimize the total packet loss rate. First, the accuracy of the Fluid and M/M/1/K analytical models is examined. Then we derive a closed-form expression for the service time in WLAN MAC transmission and use it in the mathematical formulation of our optimization problem based on the M/G/1 model. Subsequently, we introduce an approximate and simple formula for the MAC-layer service time, which leads to the M/M/1 model. Compared with M/G/1, we show in particular that our M/M/1-based model provides a low-complexity and yet quite accurate means for analyzing the MAC transmission process in WLAN. Using our M/M/1 model-based analysis, we derive closed-form formulas for the packet overflow drop rate and the optimum retry limit. These closed-form expressions can be effectively invoked for analyzing adaptive retry-limit algorithms. Simulation results (network simulator-2) verify the accuracy of our analytical models.
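The tradeoff this paper optimizes can be reproduced with a toy model: raising the retry limit m cuts the retry-drop rate p^(m+1) but lengthens the mean service time, which raises utilization and hence the overflow drop rate of a finite queue. The sketch below uses the standard M/M/1/K blocking formula; all traffic numbers are invented, and the paper's actual closed forms differ.

```python
def overflow_prob(rho: float, K: int) -> float:
    """Stationary blocking probability of an M/M/1/K queue."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho ** K / (1 - rho ** (K + 1))

def total_loss(m: int, lam: float, t_attempt: float, p_fail: float, K: int) -> float:
    """Illustrative total packet loss vs MAC retry limit m (a toy
    stand-in for the paper's closed forms): retry drops occur with
    probability p^(m+1); service time scales with expected attempts."""
    attempts = (1 - p_fail ** (m + 1)) / (1 - p_fail)  # truncated-geometric mean
    rho = lam * t_attempt * attempts                   # offered load
    p_retry_drop = p_fail ** (m + 1)
    return 1 - (1 - p_retry_drop) * (1 - overflow_prob(rho, K))

lam, t_attempt, p_fail, K = 600.0, 1e-3, 0.5, 20       # invented traffic parameters
best = min(range(11), key=lambda m: total_loss(m, lam, t_attempt, p_fail, K))
print(best, round(total_loss(best, lam, t_attempt, p_fail, K), 4))
```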
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to add it to or drop it from the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviation as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Vision meets robotics: The KITTI dataset We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
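For readers who want to touch the data, one well-known detail is that each KITTI Velodyne scan ships as a flat binary file of float32 (x, y, z, reflectance) tuples. A minimal loader follows; the directory layout shown is the usual raw-data unpacking and is a hypothetical path to adjust to your copy.

```python
import numpy as np

def load_velodyne_scan(path: str) -> np.ndarray:
    """Read one KITTI Velodyne scan: a flat binary file of float32
    (x, y, z, reflectance) tuples, reshaped to (N, 4)."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Hypothetical raw-data layout; adjust to where the dataset is unpacked.
scan = load_velodyne_scan(
    "2011_09_26/2011_09_26_drive_0001_sync/velodyne_points/data/0000000000.bin")
print(scan.shape)   # (number of points, [x, y, z, reflectance])
```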
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won 1st place in the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate ineffectual computation caused by either zero-value weights or zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
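The query abstract at the top of this record frames trust estimation as classical state estimation over a trust-dynamics model. As a purely illustrative aid, here is a minimal Python sketch of that idea; the scalar linear trust model, the noise values, and the way the three sensed behaviors are fused into one measurement are assumptions of this sketch, not the paper's identified parameters.

```python
import numpy as np

class TrustEstimator:
    """Minimal 1-D Kalman filter over an assumed scalar trust state."""

    def __init__(self, a=0.95, b=0.05, q=1e-3, r=1e-2):
        self.a, self.b = a, b      # assumed linear dynamics: t' = a*t + b*u
        self.q, self.r = q, r      # assumed process / measurement noise variances
        self.t, self.p = 0.5, 1.0  # initial trust estimate and covariance

    def step(self, u, z):
        # u: experience input in [0, 1] (e.g., the automation performed well)
        # z: fused behavioral measurement in [0, 1]
        t_pred = self.a * self.t + self.b * u       # predict
        p_pred = self.a ** 2 * self.p + self.q
        k = p_pred / (p_pred + self.r)              # Kalman gain
        self.t = t_pred + k * (z - t_pred)          # correct with measurement
        self.p = (1.0 - k) * p_pred
        return self.t

def fuse_behaviors(gaze_on_road, usage_frac, ndrt_score):
    # Hypothetical fusion: less road monitoring, more system usage, and
    # better non-driving-task performance are read as higher trust.
    return np.clip(0.4 * (1 - gaze_on_road) + 0.3 * usage_frac + 0.3 * ndrt_score, 0, 1)

est = TrustEstimator()
for gaze, usage, ndrt in [(0.8, 0.2, 0.4), (0.6, 0.5, 0.6), (0.3, 0.9, 0.9)]:
    print(round(est.step(u=1.0, z=fuse_behaviors(gaze, usage, ndrt)), 3))
```

The prediction step propagates trust through the assumed dynamics, and the update step pulls the estimate toward whatever the sensed behaviors indicate, weighted by the Kalman gain.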
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0..score_13): 1, 0.001861, 0.001144, 0.000762, 0.000587, 0.000487, 0.000358, 0.000275, 0.000162, 0.000081, 0.000059, 0.00005, 0.000044, 0.000043
Caffe: Convolutional Architecture for Fast Feature Embedding Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs with CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approximately 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures could seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique, which can predict the failure-proneness of a node in a cloud service system based on historical data, before node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, therefore improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure could be caused by a variety of reasons and reflected by many temporal and spatial signals. Furthermore, the failure data are highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
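The federated-learning abstracts above (the relay-assisted cooperative FL scheme in this record and the FL-based channel estimation framework in the previous one) both build on the same basic loop: clients train on local data and a server averages the resulting models. The sketch below shows that FedAvg-style loop on a toy least-squares task; the task, the client data, and the hyperparameters are illustrative assumptions and do not reproduce either paper's models or channels.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Plain gradient descent on one client's local least-squares objective.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four clients with private local datasets
    X = rng.normal(size=(32, 2))
    y = X @ w_true + 0.1 * rng.normal(size=32)
    clients.append((X, y))

w_global = np.zeros(2)
for rnd in range(10):
    # Each round: broadcast the global model, train locally on each client,
    # then average the returned models (equal weights; datasets are equal-sized).
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)

print(np.round(w_global, 3))  # approaches w_true without sharing raw data
```

Only model parameters cross the network, which is the communication-overhead argument both abstracts make against centralized training.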
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0..score_13): 1, 0.001838, 0.001129, 0.000752, 0.000579, 0.000481, 0.000354, 0.000271, 0.00016, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A latent space-based estimation of distribution algorithm for large-scale global optimization Large-scale global optimization problems (LSGOs) have received considerable attention in the field of meta-heuristic algorithms. Estimation of distribution algorithms (EDAs) are a major branch of meta-heuristic algorithms. However, effectively building the probabilistic model for an EDA in high dimensions is a major obstacle, making EDAs less attractive due to their large computational requirements. To overcome these shortcomings, this paper proposes a latent space-based EDA (LS-EDA), which transforms the multivariate probabilistic model of a Gaussian-based EDA into its principal-component latent subspace with lower dimensionality. LS-EDA can efficiently reduce the complexity of the EDA while maintaining its probability model without losing key information, thereby scaling up its performance for LSGOs. When the original dimensions are projected onto the latent subspace, the dimensions with larger projected values contribute more to the optimization process. LS-EDA can also help recognize and understand the problem structure, especially for black-box optimization problems. Due to dimensionality reduction, its computational budget and population size can be effectively reduced, while its performance remains highly competitive in comparison with state-of-the-art meta-heuristic algorithms for LSGOs. In order to understand the strengths and weaknesses of LS-EDA, we have carried out extensive computational studies. Our results reveal that LS-EDA outperforms the others on benchmark functions with overlapping and nonseparable variables.
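To make the latent-space EDA idea in the abstract above more concrete, here is a minimal sketch: elites are selected, projected onto a PCA subspace, a Gaussian is fitted there, and new candidates are sampled back in the full space. The sphere objective, dimensions, and elite fraction are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sphere(x):
    # Toy separable objective; minimum is 0 at the origin.
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(1)
dim, pop, elite, latent = 50, 200, 50, 5
X = rng.uniform(-5, 5, size=(pop, dim))

for gen in range(100):
    E = X[np.argsort(sphere(X))[:elite]]          # select the elite solutions
    mu = E.mean(axis=0)
    C = E - mu
    # PCA: top-k right singular vectors of the centered elites span the latent space.
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    B = Vt[:latent]                               # (latent, dim) basis
    Z = C @ B.T                                   # elites projected into the subspace
    z_mu, z_sd = Z.mean(axis=0), Z.std(axis=0) + 1e-12
    # Fit a diagonal Gaussian in the low-dimensional space, sample, map back.
    Z_new = z_mu + z_sd * rng.normal(size=(pop, latent))
    X = mu + Z_new @ B

print(round(sphere(X).min(), 6))  # best value shrinks toward 0
```

The Gaussian model has only `latent` dimensions instead of `dim`, which is the complexity reduction the abstract attributes to LS-EDA.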
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.0021, 0.001291, 0.00086, 0.000661, 0.000549, 0.000404, 0.00031, 0.000182, 0.000092, 0.000066, 0.000056, 0.00005, 0.000049
A Survey of Research on Cloud Robotics and Automation The Cloud infrastructure and its extensive set of Internet-accessible resources has potential to provide significant benefits to robots and automation systems. We consider robots and automation systems that rely on data or code from a network to support their operation, i.e., where not all sensing, computation, and memory is integrated into a standalone system. This survey is organized around four potential benefits of the Cloud: 1) Big Data: access to libraries of images, maps, trajectories, and descriptive data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also improve robots and automation systems by providing access to: a) datasets, publications, models, benchmarks, and simulation tools; b) open competitions for designs and systems; and c) open-source software. This survey includes over 150 references on results and open challenges. A website with new developments and updates is available at: http://goldberg.berkeley.edu/cloud-robotics/.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
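One generic way to realize style-content recombination is to swap channel-wise feature statistics between domains (AdaIN-style), sketched below. This is a stand-in for the general idea, not the paper's SCD-DR module; the feature shapes and the statistics-swapping rule are assumptions.

```python
# Illustrative sketch of style-content recombination via feature-statistics
# swapping (AdaIN-style): content features take on the channel-wise mean/std
# of style features. A generic stand-in, not the paper's SCD-DR module.
import numpy as np

def recombine(content_feat, style_feat, eps=1e-5):
    """Normalize content features per channel, then re-scale and re-shift
    them with the style features' per-channel statistics."""
    c_mu = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return (content_feat - c_mu) / c_std * s_std + s_mu

rng = np.random.default_rng(3)
synthetic = rng.normal(0.0, 1.0, size=(16, 32, 32))   # synthetic-domain features
real = rng.normal(0.5, 2.0, size=(16, 32, 32))        # real-domain features
hybrid = recombine(synthetic, real)                    # hybrid-domain features
print(round(float(hybrid.mean()), 2), round(float(hybrid.std()), 2))
```

The hybrid features keep the synthetic content layout but carry real-domain statistics, which is the role a transition domain plays between the two distributions.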
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
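The conjunctive-trigger/disjunctive-observation insight can be captured directly in code. The sketch below models a hypothetical erratum entry and the detection condition; the field names and example values are illustrative, not entries from the actual database.

```python
# Sketch of the key formalization: triggers are conjunctive (all must be
# applied), observations are disjunctive (any one suffices). The erratum
# fields and example values are hypothetical, not real database entries.
from dataclasses import dataclass, field

@dataclass
class Erratum:
    triggers: set = field(default_factory=set)      # all must be applied
    contexts: set = field(default_factory=set)      # conditions that must hold
    observations: set = field(default_factory=set)  # any one suffices to detect

def bug_detected(erratum, applied, active_contexts, observed):
    """A bug is detected iff every trigger was applied in the required
    context and at least one associated observation was made."""
    return (erratum.triggers <= applied
            and erratum.contexts <= active_contexts
            and bool(erratum.observations & observed))

e = Erratum(triggers={"power_transition", "custom_feature"},
            contexts={"msr_config"},
            observations={"wrong_result", "machine_check"})
print(bug_detected(e, {"power_transition", "custom_feature"},
                   {"msr_config"}, {"machine_check"}))   # True
```

The asymmetry matters for test generation: a tool must be able to combine all triggers at once, but only needs a single checker to fire to flag the bug.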
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
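The final dispatch step, choosing which per-condition model to invoke, is a simple nearest-neighbor lookup. The sketch below illustrates it with placeholder cluster centers and stand-in predictors; the clustering step and the broad learning models themselves are out of scope here.

```python
# Minimal sketch of the dispatch step: pick the operating-condition model
# whose cluster center is nearest to the current sintering parameters. The
# centers and per-condition predictors below are illustrative placeholders.
import numpy as np

def predict_carbon_efficiency(x, centers, condition_models):
    """x: current sintering parameter vector; centers: one center per
    operating condition; condition_models: one trained predictor each."""
    distances = np.linalg.norm(centers - x, axis=1)
    k = int(np.argmin(distances))        # nearest-neighbor criterion
    return condition_models[k](x)        # invoke that condition's BLM

centers = np.array([[0.0, 0.0], [5.0, 5.0]])
models = [lambda x: 0.8 - 0.01 * x.sum(),   # stand-ins for trained BLMs
          lambda x: 0.6 - 0.02 * x.sum()]
print(predict_carbon_efficiency(np.array([4.5, 5.2]), centers, models))
```

Keeping one light model per operating condition is what avoids the full retraining the abstract warns about: when conditions drift, only the affected condition's model needs updating.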
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task from, or add it to, the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
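The admission decision can be framed as binary classification, as sketched below with an off-the-shelf SVM. The task features, synthetic labels, and threshold rule are assumptions for illustration; the paper's feature set and training data are not reproduced here.

```python
# Sketch of SVM-based task admission: classify whether a task can finish
# before its deadline from simple task features. The features and the
# synthetic labeling rule are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Features: [input size (MB), required CPU work (Gcycles), deadline (s)]
X = rng.uniform([0.1, 0.1, 0.1], [10.0, 5.0, 2.0], size=(500, 3))
# Toy label: completable if required work is small relative to the deadline.
y = (X[:, 1] / X[:, 2] < 2.0).astype(int)

admit = SVC(kernel="rbf").fit(X, y)

task = np.array([[2.0, 1.0, 0.8]])
if admit.predict(task)[0] == 1:
    print("admit task to queue")   # proceed to Lyapunov-based offloading
else:
    print("drop task")             # predicted to miss its deadline
```

Filtering out tasks that are predicted to miss their deadlines keeps the Lyapunov queues from being loaded with work that would waste energy without counting as completed.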
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
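As a toy illustration of auction-style head/tail selection, the sketch below ranks DTs in a group by a single bid value (here, assumed spare computing capacity) and picks the extremes. AG-CDM's actual bidding mechanism, payments, and utilities are not modeled.

```python
# Toy sketch of selecting a group's head and tail DTs from bids. The bid
# values (spare computing capacity) are a hypothetical stand-in for the
# utilities used by the paper's auction game.
def select_head_tail(bids):
    """bids: {dt_id: bid_value}; returns (head_dt, tail_dt)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], ranked[-1]

print(select_head_tail({"dt_a": 3.2, "dt_b": 5.1, "dt_c": 1.4}))
# -> ('dt_b', 'dt_c'): the strongest bidder leads, the weakest trails
```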
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface) assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than the state-of-the-art ML-based schemes.
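The communication saving comes from exchanging model weights instead of raw pilot and channel data. A minimal sketch of the federated averaging step a BS might run is shown below; the layer shapes and user counts are illustrative.

```python
# Minimal sketch of the federated averaging step underlying FL-based channel
# estimation: users train locally and the BS only aggregates model weights,
# so raw pilot/channel data never leave the devices. Shapes are illustrative.
import numpy as np

def fedavg(local_weights, n_samples):
    """Weighted average of user models, weighted by local dataset size."""
    total = float(sum(n_samples))
    return [sum(w[layer] * (n / total)
                for w, n in zip(local_weights, n_samples))
            for layer in range(len(local_weights[0]))]

# Three users, each holding a two-layer CNN's weights (stand-in arrays).
users = [[np.full((3, 3), u), np.full((8,), u)] for u in (1.0, 2.0, 3.0)]
global_model = fedavg(users, n_samples=[100, 200, 100])
print(global_model[0][0, 0])   # 2.0: sizes weight user 2 twice as heavily
```

Only the weight arrays cross the air interface each round, which is why the overhead scales with model size rather than with the training dataset size.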
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most-likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, map matching reduces to RNN inference over cell tower sequences, resulting in fast map matching speed. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
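The encoder-decoder formulation can be sketched as a small seq2seq skeleton: encode the cell tower sequence, then greedily decode road-segment IDs. The PyTorch sketch below uses illustrative vocabulary sizes and a fixed-length greedy decode; DMM's spatial-aware input representation and reinforcement learning refinement are omitted.

```python
# Skeleton of an RNN encoder-decoder for map matching in the spirit of DMM:
# encode a cell-tower ID sequence, decode a road-segment ID sequence. Sizes,
# vocabularies, and the greedy decode loop are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqMatcher(nn.Module):
    def __init__(self, n_towers, n_roads, dim=64):
        super().__init__()
        self.tower_emb = nn.Embedding(n_towers, dim)
        self.road_emb = nn.Embedding(n_roads, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_roads)

    def forward(self, towers, max_len=20, start_id=0):
        _, h = self.encoder(self.tower_emb(towers))   # summarize the sequence
        token = torch.full((towers.size(0), 1), start_id, dtype=torch.long)
        roads = []
        for _ in range(max_len):                      # greedy decoding
            step, h = self.decoder(self.road_emb(token), h)
            token = self.out(step).argmax(dim=-1)     # next road segment ID
            roads.append(token)
        return torch.cat(roads, dim=1)

model = Seq2SeqMatcher(n_towers=1000, n_roads=5000)
cells = torch.randint(0, 1000, (1, 8))               # one tower sequence
print(model(cells).shape)                            # (1, 20) road IDs
```

The speed claim follows from this structure: once trained, matching a sequence is a single forward pass rather than a Viterbi-style search over high-order HMM states.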
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.001869, 0.001148, 0.000765, 0.000588, 0.000489, 0.000359, 0.000276, 0.000162, 0.000082, 0.000059, 0.00005, 0.000044, 0.000043
Practical Issues in Temporal Difference Learning This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
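The update rule at the heart of TD(λ) is compact. The sketch below implements the tabular version with accumulating eligibility traces on a toy chain; TD-Gammon applies the same rule to neural network weights via backpropagated gradients rather than a table.

```python
# Minimal sketch of tabular TD(lambda) with accumulating eligibility traces.
# The toy chain environment and hyperparameters are illustrative assumptions.
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=1.0, lam=0.7):
    v = np.zeros(n_states)
    for episode in episodes:            # episode: list of (s, r, s_next) steps;
        e = np.zeros(n_states)          # s_next is None at the terminal step
        for s, r, s_next in episode:
            v_next = v[s_next] if s_next is not None else 0.0
            delta = r + gamma * v_next - v[s]    # TD error
            e[s] += 1.0                          # accumulate trace for s
            v += alpha * delta * e               # credit all recent states
            e *= gamma * lam                     # decay traces over time
    return v

# Toy 3-state chain: 0 -> 1 -> terminal, reward only on the final step.
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 100
print(np.round(td_lambda(episodes, n_states=3), 2))
```

The traces are what let a delayed outcome (here, the terminal reward; in backgammon, the game result) propagate credit back through the states that preceded it.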
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task from, or add it to, the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
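Two of the listed ingredients, the speed-deviation reward and the 1 s reaction delay, are easy to sketch. The snippet below shows one plausible encoding; the reward scaling, simulation step size, and buffering mechanics are assumptions, not the paper's exact formulation.

```python
# Sketch of two ingredients from the highlights: a reward that penalizes
# deviation from the observed human speed, and actions applied after a 1 s
# reaction delay. The scaling and step size are illustrative assumptions.
from collections import deque

def speed_deviation_reward(v_model, v_human):
    return -abs(v_model - v_human)          # smaller deviation, higher reward

class DelayedAction:
    """Apply each commanded acceleration delay_steps ticks after issuance."""
    def __init__(self, delay_steps, neutral=0.0):
        self.buf = deque([neutral] * delay_steps)

    def step(self, action):
        self.buf.append(action)             # queue the new command
        return self.buf.popleft()           # release the one from 1 s ago

delay = DelayedAction(delay_steps=10)       # 10 steps at 0.1 s = 1 s delay
print(delay.step(0.5), speed_deviation_reward(14.8, 15.0))
```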
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most-likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, map matching reduces to RNN inference over cell tower sequences, resulting in fast map matching speed. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0 to score_13): 1, 0.001886, 0.001159, 0.000772, 0.000594, 0.000493, 0.000363, 0.000278, 0.000164, 0.000082, 0.00006, 0.00005, 0.000045, 0.000044
Temporal difference learning and TD-Gammon Ever since the days of Shannon's proposal for a chess-playing algorithm [12] and Samuel's checkers-learning program [10] the domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning. Such board games offer the challenge of tremendous complexity and sophistication required to play at expert level. At the same time, the problem inputs and performance measures are clear-cut and well defined, and the game environment is readily automated in that it is easy to simulate the board, the rules of legal play, and the rules regarding when the game is over and determining the outcome.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task from, or add it to, the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
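The straggler issue can be seen in a toy over-the-air aggregation model: every device pre-inverts its channel, so the common power scale is limited by the weakest channel, and a small scale amplifies receiver noise. The single-antenna, real-valued simulation below is an illustrative simplification without relays.

```python
# Toy simulation of over-the-air model aggregation: devices transmit scaled
# model updates simultaneously and the server receives their channel-weighted
# sum plus noise. A real-valued, single-antenna simplification of the system
# in the paper, without the relay stage.
import numpy as np

rng = np.random.default_rng(2)
n_devices, dim = 10, 4
updates = rng.normal(size=(n_devices, dim))       # local model updates
h = np.abs(rng.normal(size=n_devices)) + 0.05     # device channel gains

# Each device inverts its channel, scaled by the weakest link (the straggler):
eta = h.min()                                     # power-limiting factor
tx = (eta / h)[:, None] * updates                 # pre-scaled transmissions

noise = rng.normal(scale=0.01, size=dim)
y = (h[:, None] * tx).sum(axis=0) + noise         # superposed at the server
estimate = y / (eta * n_devices)                  # de-scale the aggregate

print(np.round(estimate - updates.mean(axis=0), 3))  # residual = noise/(eta*n)
```

Because the residual error scales with 1/eta, a single weak channel degrades the whole aggregate, which is exactly the gap the relay-assisted scheme targets.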
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it processes cell tower sequences as RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset show that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
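As a flavor of the spatial-aware representation step, the toy function below buckets raw cell tower coordinates onto a coarse grid so that nearby towers map to nearby token ids; the origin, cell size, and grid width are invented, and DMM's actual encoding is richer.

```python
# Hedged sketch: map cell tower (lat, lon) pairs onto a coarse grid so
# that spatially close towers get close (or identical) token ids.
def grid_token(lat, lon, lat0=39.90, lon0=116.30, cell=0.01, width=100):
    row = int((lat - lat0) / cell)
    col = int((lon - lon0) / cell)
    return row * width + col

sequence = [(39.984, 116.318), (39.985, 116.322), (39.991, 116.330)]
tokens = [grid_token(lat, lon) for lat, lon in sequence]
print(tokens)  # nearby towers -> nearby token ids
```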
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
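The Kalman filter core of such an estimator fits in a few lines. The sketch below tracks a scalar trust state with a random-walk model; the noise values and the idea of collapsing the behavioral signals into one fused measurement z are illustrative assumptions, not the paper's identified parameters.

```python
def kalman_step(x, P, z, q=0.01, r=0.5):
    """One predict/update cycle for a scalar trust state x.
    z is a (hypothetical) trust measurement fused from eye-tracking,
    usage time, and NDRT performance; q, r are illustrative noises."""
    # Predict: random-walk trust dynamics.
    P = P + q
    # Update with the behavioral measurement.
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.5, 1.0               # initial trust estimate and variance
for z in [0.62, 0.70, 0.66, 0.75]:
    x, P = kalman_step(x, P, z)
    print(round(x, 3), round(P, 3))
```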
Similarity scores score_0 through score_13: 1, 0.001887, 0.00116, 0.000772, 0.000594, 0.000494, 0.000363, 0.000279, 0.000164, 0.000082, 0.00006, 0.00005, 0.000045, 0.000044
Breaking Spectrum Gridlock With Cognitive Radios: An Information Theoretic Perspective Cognitive radios hold tremendous promise for increasing spectral efficiency in wireless systems. This paper surveys the fundamental capacity limits and associated transmission techniques for different wireless network design paradigms based on this promising technology. These paradigms are unified by the definition of a cognitive radio as an intelligent wireless communication device that exploits ...
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
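One common primitive for recombining style and content across domains, in the spirit of (but not identical to) the SCD-DR module, is adaptive instance normalization: re-normalize content features with the style source's channel statistics. A minimal numpy sketch, with all shapes and values invented:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Re-normalize content feature maps (C, H, W) with the style
    source's per-channel mean/std -- a generic style-content
    recombination primitive, not the paper's SCD-DR module."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mu) / c_std + s_mu

rng = np.random.default_rng(1)
synthetic = rng.normal(0.0, 1.0, size=(3, 8, 8))   # "content" features
real = rng.normal(0.5, 2.0, size=(3, 8, 8))        # "style" features
hybrid = adain(synthetic, real)
print(hybrid.mean(), hybrid.std())                 # shifts toward real stats
```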
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
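The key observation, conjunctive triggers and disjunctive observations, translates directly into set logic. A minimal sketch with invented trigger and observation names:

```python
# Hedged sketch of the paper's key observation: to expose a bug you must
# apply *all* of its triggers, and it suffices to see *any one* of its
# observations. The names below are invented for illustration.
def bug_exposed(applied, observed, bug):
    return bug["triggers"].issubset(applied) and bool(bug["observations"] & observed)

bug = {
    "triggers": {"power_level_transition", "custom_msr_config"},
    "observations": {"machine_check", "wrong_result", "hang"},
}

print(bug_exposed({"power_level_transition"}, {"hang"}, bug))            # False
print(bug_exposed({"power_level_transition", "custom_msr_config"},
                  {"wrong_result"}, bug))                                # True
```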
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
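The final dispatch step, invoking the BLM whose operating-condition center is nearest to the incoming sample, is simple to sketch; the cluster centers and the per-condition predictors below are placeholders, not fitted models.

```python
import numpy as np

# Hedged sketch of the dispatch step: route a new sample to the model
# trained for the nearest operating-condition cluster center.
centers = np.array([[0.2, 1.0], [0.8, 0.3], [0.5, 0.7]])  # placeholder WKFCM centers
models = [lambda x: 0.1, lambda x: 0.2, lambda x: 0.3]    # stand-ins for per-condition BLMs

def predict(sample):
    idx = int(np.argmin(np.linalg.norm(centers - sample, axis=1)))
    return models[idx](sample)

print(predict(np.array([0.75, 0.25])))  # routed to the second model -> 0.2
```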
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
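A bare-bones drift-plus-penalty decision of the kind used here can be sketched with a single queue. V, the per-action rates and costs, and the admission stub (standing in for the trained SVM) are all illustrative assumptions:

```python
# Hedged sketch of a Lyapunov drift-plus-penalty action choice for one
# task slot. V trades queue stability against cost; numbers are invented.
ACTIONS = {
    "local": {"served_bits": 2e6, "cost": 1.0},
    "edge":  {"served_bits": 5e6, "cost": 2.5},
    "cloud": {"served_bits": 8e6, "cost": 4.0},
}

def choose_action(queue_backlog, V=50.0):
    # Minimize drift-plus-penalty: -Q * service + V * cost.
    return min(ACTIONS, key=lambda a: -queue_backlog * ACTIONS[a]["served_bits"]
                                      + V * ACTIONS[a]["cost"])

def admit(task_bits, deadline_s, est_rate=4e6):
    """Stand-in for the SVM admission test: admit only if the task can
    plausibly finish before its deadline."""
    return task_bits / est_rate <= deadline_s

print(admit(6e6, 2.0))                    # True: 1.5 s of work in a 2 s budget
print(choose_action(queue_backlog=1e-5))  # light backlog -> cheap local
print(choose_action(queue_backlog=0.5))   # heavy backlog -> fast cloud
```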
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
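The paper's framework itself is not reproduced here; as a flavor of the kind of latency budgeting such an analysis performs, the toy M/M/1 calculation below compares mean sojourn times of a faster container-backed VNF and a slower VM-backed one against a 1 ms URLLC-style budget (all rates invented):

```python
# Toy M/M/1 sojourn time T = 1 / (mu - lambda), used here only to
# illustrate the kind of latency budget check such a framework performs.
def sojourn_ms(arrival_rate, service_rate):
    assert service_rate > arrival_rate, "queue must be stable"
    return 1000.0 / (service_rate - arrival_rate)

arrivals = 8000.0                      # requests/s (illustrative)
for name, mu in [("container VNF", 12000.0), ("VM VNF", 8800.0)]:
    t = sojourn_ms(arrivals, mu)
    print(f"{name}: {t:.2f} ms {'OK' if t <= 1.0 else 'misses URLLC budget'}")
```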
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. How to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
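The monitoring-frequency proxy is straightforward to compute from labeled gaze samples. The sketch below counts glances onto a driving-related area of interest per minute; the labels, sampling rate, and AOI scheme are illustrative, not the study's coding:

```python
# Hedged sketch: monitoring frequency = number of glances onto a
# driving-related area of interest (AOI) per minute of gaze data.
def monitoring_frequency(gaze_labels, hz=60):
    glances = sum(1 for prev, cur in zip(gaze_labels, gaze_labels[1:])
                  if prev != "road" and cur == "road")
    minutes = len(gaze_labels) / (hz * 60)
    return glances / minutes

labels = (["ndrt"] * 300 + ["road"] * 60) * 10   # 10 glances in 1 minute
print(monitoring_frequency(labels))              # -> 10.0
```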
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it processes cell tower sequences as RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset show that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Similarity scores score_0 through score_13: 1, 0.001963, 0.001206, 0.000803, 0.000618, 0.000513, 0.000378, 0.00029, 0.00017, 0.000086, 0.000062, 0.000052, 0.000046, 0.000045
W4: Real-Time Surveillance of People and Their Activities W4 is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W4 can also determine whether people are carrying objects, can segment objects from their silhouettes, and can construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz on 320×240 resolution images on a 400 MHz dual-Pentium II PC.
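W4's pipeline starts from background subtraction on gray-scale imagery; below is a minimal numpy rendition of that first stage, where the background model, the synthetic frame, and the threshold are all illustrative:

```python
import numpy as np

# Minimal sketch of the first W4 stage: subtract a background model
# from the current gray-scale frame and threshold to a foreground mask.
rng = np.random.default_rng(2)
background = rng.integers(90, 110, size=(240, 320)).astype(np.int16)
frame = background.copy()
frame[100:180, 140:180] += 60          # a synthetic "person" region

foreground = np.abs(frame - background) > 30   # illustrative threshold
print(foreground.sum(), "foreground pixels")   # -> 3200 foreground pixels
```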
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. How to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while operating the multiply-and-accumulate (MAC) units in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
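The zero-bit statistic is easy to reproduce in spirit: quantize weights to 8-bit integers and count zero bits in the two's-complement representation. The sketch below uses random weights and an arbitrary quantization step, so the exact fraction will differ from the paper's 68.9%:

```python
import numpy as np

# Hedged sketch: fraction of zero bits in int8-quantized weights.
# Random weights here, so the fraction only illustrates the measurement.
rng = np.random.default_rng(3)
weights = rng.normal(0, 0.05, size=10000)
q = np.clip(np.round(weights / 0.01), -128, 127).astype(np.int8)

bits = np.unpackbits(q.view(np.uint8))        # two's-complement bit planes
print("zero-bit fraction:", 1.0 - bits.mean())
```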
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Similarity scores score_0 through score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
The wire-tap channel We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (DMC) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second DMC. Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect ("error-free") transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider the transmission to be accomplished in perfect secrecy. Our results imply that there exists a Cs > 0 such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.
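For the special case where both the main channel and the wire-tap channel are binary symmetric, the secrecy capacity has the closed form Cs = h(p_w) - h(p_m), the difference between the binary entropies of the wire-tapper's and the main channel's crossover probabilities (valid for p_m <= p_w <= 1/2). A quick numeric check with arbitrarily chosen channel parameters:

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_secrecy_capacity(p_main, p_wiretap):
    """Cs = h(p_w) - h(p_m) for a degraded BSC wire-tap channel
    (requires p_main <= p_wiretap <= 0.5)."""
    assert p_main <= p_wiretap <= 0.5
    return h(p_wiretap) - h(p_main)

print(round(bsc_secrecy_capacity(0.05, 0.20), 4))  # ~0.4355 bits per channel use
```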
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. How to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual network. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. Then, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits make up as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with the state-of-the-art baselines.
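The intuition behind weight kneading can be demonstrated with plain bit arithmetic: if only the set bits of a weight are "essential", a multiply can be replaced by a few shift-adds. The 8-bit unsigned quantization and the random weights below are assumptions for the demo, not Tetris's actual datapath.

```python
# Toy illustration of the observation behind weight kneading: most bits in
# quantized weights are zero, so only the set ('essential') bits need work.
import numpy as np

def essential_bits(w8):
    """Positions of set bits in an unsigned 8-bit weight."""
    return [k for k in range(8) if (w8 >> k) & 1]

def sac_multiply(x, w8):
    """Split-and-accumulate style product: shift-add only the set bits,
    instead of a full multiply (x * w8)."""
    return sum(x << k for k in essential_bits(w8))

w = 0b00010010  # 18: only 2 of 8 bits are set
assert sac_multiply(5, w) == 5 * w

weights = np.random.randint(0, 256, size=10_000, dtype=np.uint8)
zero_bit_fraction = 1 - np.unpackbits(weights).mean()
print(f"zero-bit fraction in random uint8 weights: {zero_bit_fraction:.2f}")
```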
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
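A scalar Kalman filter conveys the flavor of such an estimator; the trust dynamics, noise variances, and measurement mapping below are illustrative assumptions, whereas the paper identifies its model parameters from user-study data.

```python
# Minimal scalar Kalman filter tracking a latent trust level from a noisy
# behavioral measurement; all parameters are assumed for illustration.
def kalman_trust_step(x, P, z, a=0.98, q=0.01, h=1.0, r=0.25):
    # Predict: trust evolves with slow decay a and process noise q.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update with measurement z = h * trust + noise of variance r.
    K = P_pred * h / (h * P_pred * h + r)   # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1 - K * h) * P_pred
    return x_new, P_new

x, P = 0.5, 1.0                # initial trust estimate and variance
for z in [0.6, 0.7, 0.65, 0.8]:
    x, P = kalman_trust_step(x, P, z)
print(f"trust estimate ≈ {x:.2f}")
```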
Scores: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
A survey on vehicular cloud computing Vehicular networking has become a significant research area due to its specific features and applications, such as standardization, efficient traffic management, road safety, and infotainment. Vehicles are expected to carry more communication systems, on-board computing facilities, storage, and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions have been proposed to address the challenges and issues of vehicular networks. Vehicular Cloud Computing (VCC) is one of these solutions. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage, and internet access, for decision making. This paper presents a state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for the vehicular cloud in which special attention is devoted to the extensive applications, cloud formations, key management, inter-cloud communication systems, and the broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC and itemize the properties required in a vehicular cloud to support this model. We compare this mechanism with normal Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing the literature, we found that VCC is a technologically feasible and economically viable paradigm shift for converging intelligent vehicular networks toward autonomous traffic, vehicle control, and perception systems.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors released since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
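The conjunctive-triggers/disjunctive-observations insight is easy to state in code; the erratum content below is a made-up example, not an entry from the database.

```python
# Sketch of the paper's key observation: a bug is exposed only if *all* of
# its triggers are applied (conjunctive), and detecting it requires *any
# one* of its observations (disjunctive). Erratum content is fabricated.
erratum = {
    "triggers": {"enable_turbo", "msr_config_X", "power_transition"},
    "observations": {"wrong_result", "machine_check"},
}

def bug_exposed(applied_triggers, seen_events):
    all_triggers = erratum["triggers"] <= set(applied_triggers)          # conjunctive
    any_observation = bool(erratum["observations"] & set(seen_events))   # disjunctive
    return all_triggers and any_observation

print(bug_exposed({"enable_turbo", "msr_config_X", "power_transition"},
                  {"machine_check"}))                    # True
print(bug_exposed({"enable_turbo"}, {"wrong_result"}))   # False: triggers missing
```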
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
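The final dispatch step, a nearest-neighbor criterion over operating conditions, can be sketched in a few lines; the cluster centers, features, and stand-in models below are illustrative assumptions, not the WKFCM-BLM pipeline itself.

```python
# Toy sketch of the dispatch step: cluster centers represent operating
# conditions, and the nearest-neighbor criterion picks which per-condition
# model to invoke. Centers, features, and models are illustrative only.
import numpy as np

centers = np.array([[0.2, 0.8], [0.7, 0.3]])   # stand-in condition centers
models = [lambda x: 0.9 * x.sum(),             # stand-in per-condition BLMs
          lambda x: 1.2 * x.sum()]

def predict(x):
    cond = np.argmin(np.linalg.norm(centers - x, axis=1))  # nearest condition
    return models[cond](x)

print(predict(np.array([0.25, 0.75])))  # routed to condition 0
```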
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
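The admission step could look roughly like the following sketch, which trains an SVM on past tasks labeled by deadline success; the features, data, and decision rule are fabricated placeholders, not the paper's trained model.

```python
# Sketch of SVM-based task admission: train on past tasks labeled by whether
# they met their deadline, then admit a new task only if the classifier
# predicts on-time completion. Features and data are fabricated.
from sklearn.svm import SVC

# [task size (Mbit), deadline (ms), current queue length] -> met deadline?
X = [[1.0, 50, 2], [4.0, 30, 5], [0.5, 80, 1], [3.0, 20, 6], [2.0, 60, 3]]
y = [1, 0, 1, 0, 1]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

def admit(task):
    # 1 -> add to the user's queue, 0 -> drop the task.
    return bool(clf.predict([task])[0])

print(admit([1.5, 70, 2]))  # likely admitted under this toy model
```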
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning. •It uses speed deviations as the reward function and considers a reaction delay of 1 s. •The deep deterministic policy gradient algorithm was used to optimize the model. •The model outperformed traditional and recent data-driven car-following models. •The model demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it processes cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
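A bare-bones encoder-decoder can convey the core idea; the vocabulary sizes, hidden width, start token, and greedy decoding below are assumptions, and the paper's spatial-aware representation and RL-based refinement are omitted.

```python
# Compact sketch of the DMM idea: an encoder-decoder RNN mapping a cell-tower
# ID sequence to a road-segment ID sequence. Dimensions are illustrative.
import torch
import torch.nn as nn

N_TOWERS, N_ROADS, H = 1000, 500, 64  # assumed vocabulary sizes and width

class Seq2SeqMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_TOWERS, H)
        self.encoder = nn.GRU(H, H, batch_first=True)
        self.decoder = nn.GRU(H, H, batch_first=True)
        self.out = nn.Linear(H, N_ROADS)

    def forward(self, towers, max_len=20):
        _, h = self.encoder(self.embed(towers))   # summarize tower sequence
        step = torch.zeros(towers.size(0), 1, H)  # start token (assumed)
        roads = []
        for _ in range(max_len):                  # greedy decoding
            step, h = self.decoder(step, h)
            roads.append(self.out(step).argmax(-1))
        return torch.cat(roads, dim=1)

matcher = Seq2SeqMatcher()
print(matcher(torch.randint(0, N_TOWERS, (1, 15))).shape)  # (1, 20) road IDs
```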
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores: 1, 0.002135, 0.001312, 0.000874, 0.000672, 0.000559, 0.000411, 0.000315, 0.000185, 0.000093, 0.000068, 0.000057, 0.000051, 0.000049
Overview and Evaluation of Bluetooth Low Energy: An Emerging Low-Power Wireless Technology Bluetooth Low Energy (BLE) is an emerging low-power wireless technology developed for short-range control and monitoring applications that is expected to be incorporated into billions of devices in the next few years. This paper describes the main features of BLE, explores its potential applications, and investigates the impact of various critical parameters on its performance. BLE represents a trade-off between energy consumption, latency, piconet size, and throughput that mainly depends on parameters such as connInterval and connSlaveLatency. According to theoretical results, the lifetime of a BLE device powered by a coin cell battery ranges between 2.0 days and 14.1 years. The number of simultaneous slaves per master ranges between 2 and 5,917. The minimum latency for a master to obtain a sensor reading is 676 µs, although simulation results show that, under high bit error rate, average latency increases by up to three orders of magnitude. The paper provides experimental results that complement the theoretical and simulation findings, and indicates implementation constraints that may reduce BLE performance.
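The lifetime trade-off can be reproduced with back-of-the-envelope arithmetic; the battery capacity, sleep current, and per-event charge below are assumed figures, chosen only to show how connInterval drives the estimate.

```python
# Back-of-the-envelope lifetime estimate of the kind the BLE evaluation
# performs: average current over a connection interval versus a coin cell
# capacity. All current/charge figures here are assumed placeholders.
BATTERY_mAh = 230.0          # typical CR2032 coin cell capacity
SLEEP_uA = 1.0               # assumed sleep current (microamps)
EVENT_CHARGE_uC = 15.0       # assumed charge per connection event (microcoulombs)

def lifetime_years(conn_interval_s):
    avg_uA = SLEEP_uA + EVENT_CHARGE_uC / conn_interval_s
    hours = BATTERY_mAh * 1000.0 / avg_uA   # microamp-hours / microamps
    return hours / (24 * 365)

for ci in (0.1, 1.0, 4.0):   # connInterval in seconds
    print(f"connInterval={ci:>4}s -> {lifetime_years(ci):5.1f} years")
```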
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors released since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning. •It uses speed deviations as the reward function and considers a reaction delay of 1 s. •The deep deterministic policy gradient algorithm was used to optimize the model. •The model outperformed traditional and recent data-driven car-following models. •The model demonstrated a good capability of generalization.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
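Why the weakest channel dominates can be shown numerically: with per-device power limits, every device scales its signal so the server can de-scale uniformly, and that common scale is pinned by the smallest channel gain. The alignment rule, channel model, and noise level below are illustrative assumptions, not the paper's relay design.

```python
# Toy illustration of the straggler issue in over-the-air aggregation that
# the relay-assisted scheme addresses: noise is amplified by 1/sqrt(eta),
# where eta is set by the weakest channel. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K = 10
h = rng.rayleigh(1.0, size=K)                 # device channel gains (assumed)
updates = rng.normal(size=(K, 4))             # local model updates (toy)

eta = h.min() ** 2                            # alignment factor: weakest link
tx = (np.sqrt(eta) / h)[:, None] * updates    # each device pre-inverts its channel
rx = (h[:, None] * tx).sum(axis=0)            # superposition over the air
rx += 0.05 * rng.normal(size=4)               # receiver noise
estimate = rx / (K * np.sqrt(eta))            # server de-scaling

# The smaller min|h| is, the larger this aggregation error becomes.
print(np.abs(estimate - updates.mean(axis=0)).max())
```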
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits make up as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with the state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Wireless sensor network survey A wireless sensor network (WSN) has important applications such as remote environmental monitoring and target tracking. This has been enabled by the availability, particularly in recent years, of sensors that are smaller, cheaper, and more intelligent. These sensors are equipped with wireless interfaces with which they can communicate with one another to form a network. The design of a WSN depends significantly on the application, and it must consider factors such as the environment, the application's design objectives, cost, hardware, and system constraints. The goal of our survey is to present a comprehensive review of the recent literature since the publication of [I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE Communications Magazine, 2002]. Following a top-down approach, we give an overview of several new applications and then review the literature on various aspects of WSNs. We classify the problems into three different categories: (1) internal platform and underlying operating system, (2) communication protocol stack, and (3) network services, provisioning, and deployment. We review the major developments in these three categories and outline new challenges.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors released since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated good generalization capability.
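The highlights above can be made concrete with a small sketch of the reward signal: penalize the deviation between the follower's speed and a target speed, applying observations with a roughly 1 s delay via a fixed-length buffer. The step size, buffer length, and class names are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a delayed speed-deviation reward for a car-following agent.
from collections import deque

REACTION_DELAY_STEPS = 10          # ~1 s at a 0.1 s simulation step (assumed)

class DelayedSpeedReward:
    def __init__(self):
        self.buffer = deque(maxlen=REACTION_DELAY_STEPS)

    def step(self, observed_speed, target_speed):
        self.buffer.append(observed_speed)
        if len(self.buffer) < REACTION_DELAY_STEPS:
            return 0.0                       # delay not yet elapsed
        delayed = self.buffer[0]             # speed observed ~1 s ago
        return -abs(delayed - target_speed)  # reward = negative speed deviation

r = DelayedSpeedReward()
for v in [10.0, 10.5, 11.0] * 5:
    reward = r.step(v, target_speed=10.8)
print(reward)
```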
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching speed. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for a map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
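The encoder-decoder idea can be sketched briefly: a GRU encoder summarizes an embedded cell-tower sequence, and a GRU decoder emits logits over road segments. All sizes, vocabularies, and layer choices below are placeholders, not DMM's actual architecture.

```python
# Minimal seq2seq sketch in the spirit of RNN-based map matching.
import torch
import torch.nn as nn

class Seq2SeqMatcher(nn.Module):
    def __init__(self, n_towers=5000, n_roads=20000, dim=128):
        super().__init__()
        self.tower_emb = nn.Embedding(n_towers, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.road_emb = nn.Embedding(n_roads, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_roads)

    def forward(self, towers, roads_in):
        _, h = self.encoder(self.tower_emb(towers))   # summarize tower sequence
        dec, _ = self.decoder(self.road_emb(roads_in), h)
        return self.out(dec)                          # logits over road segments

model = Seq2SeqMatcher()
towers = torch.randint(0, 5000, (2, 12))    # batch of 2 tower sequences
roads = torch.randint(0, 20000, (2, 15))    # teacher-forced road inputs
print(model(towers, roads).shape)           # torch.Size([2, 15, 20000])
```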
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
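A minimal scalar Kalman-filter sketch in the spirit of the estimation approach above: a latent trust level follows a random walk and is updated from a single noisy behavioral measurement. The one-dimensional model and the noise values are illustrative assumptions, not the paper's identified parameters.

```python
# Scalar Kalman filter tracking a latent trust level from noisy
# behavioral measurements (e.g., a gaze-derived score in [0, 1]).
def kalman_trust(measurements, q=0.01, r=0.1):
    x, p = 0.5, 1.0                 # initial trust estimate and variance
    estimates = []
    for z in measurements:
        p += q                      # predict: random-walk trust dynamics
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with behavioral measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

print(kalman_trust([0.4, 0.55, 0.6, 0.7, 0.65]))
```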
Scores (score_0–score_13): 1, 0.001892, 0.001163, 0.000774, 0.000596, 0.000495, 0.000364, 0.000279, 0.000164, 0.000083, 0.00006, 0.00005, 0.000045, 0.000044
A survey on sensor networks The advancement in wireless communications and electronics has enabled the development of low-cost sensor networks. The sensor networks can be used for various application areas (e.g., health, military, home). For different application areas, there are different technical issues that researchers are currently resolving. The current state of the art of sensor networks is captured in this article, where solutions are discussed under their related protocol stack layer sections. This article also points out the open research issues and intends to spark new interests and developments in this field.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatched domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
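The conjunctive/disjunctive observation above lends itself to a one-function sketch: a bug is considered detected only if all of its triggers were applied and at least one of its listed deviations was observed. The entry structure below is a hypothetical simplification of a RemembERR record, not its actual schema.

```python
# Triggers are conjunctive; observations are disjunctive.
def erratum_detected(applied_triggers, seen_observations, erratum):
    triggers_hit = all(t in applied_triggers for t in erratum["triggers"])
    observed = any(o in seen_observations for o in erratum["observations"])
    return triggers_hit and observed

erratum = {"triggers": {"power_transition", "custom_feature_on"},
           "observations": {"wrong_msr_value", "hang"}}
print(erratum_detected({"power_transition", "custom_feature_on"},
                       {"hang"}, erratum))   # True
```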
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
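The final routing step described above (a nearest neighbor criterion over operating conditions) can be sketched in a few lines; the cluster centers and the stand-in per-condition models below are placeholders rather than trained WKFCM/BLM components.

```python
# Route a new sample to the model of its nearest operating-condition center.
import numpy as np

def select_model(x, centers, models):
    # nearest-neighbor criterion over operating-condition centers
    idx = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
    return models[idx]

centers = np.array([[0.2, 0.8], [0.7, 0.3]])        # two operating conditions
models = [lambda x: 0.9, lambda x: 0.6]             # stand-in per-condition BLMs
x_new = np.array([0.65, 0.35])
print(select_model(x_new, centers, models)(x_new))  # predicted carbon efficiency
```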
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital-twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. Besides, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
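To see why zero bits leave headroom, the snippet below measures the zero-bit fraction of a quantized weight tensor, i.e., the portion of bit-level work a bit-serial SAC-style datapath could in principle skip. The data are synthetic, so the measured fraction differs from the 68.9% reported for real models.

```python
# Measure the fraction of zero bits in a fake 8-bit weight tensor.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(0, 256, size=10_000, dtype=np.uint8)  # synthetic weights
bits = np.unpackbits(weights)                                # 8 bits per weight
zero_fraction = 1.0 - bits.mean()                            # share of zero bits
print(f"zero-bit fraction: {zero_fraction:.2%}")
```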
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
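A compact sketch of the classic DE/rand/1/bin variant of this heuristic, minimizing a simple test function; the control parameters F and CR, the population size, and the iteration budget are typical textbook choices rather than values from the paper.

```python
# DE/rand/1/bin: differential mutation, binomial crossover, greedy selection.
import numpy as np

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    x = rng.uniform(lo, hi, size=(pop, d))
    fx = np.array([f(v) for v in x])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(d) < CR                  # binomial crossover mask
            cross[rng.integers(d)] = True               # ensure one gene crosses
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft <= fx[i]:                             # greedy selection
                x[i], fx[i] = trial, ft
    return x[fx.argmin()], fx.min()

best, val = differential_evolution(lambda v: np.sum(v**2), [(-5, 5)] * 3)
print(best, val)   # converges toward the origin with value near 0
```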
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatched domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital-twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. Besides, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001851, 0.001138, 0.000758, 0.000583, 0.000484, 0.000356, 0.000273, 0.000161, 0.000081, 0.000059, 0.000049, 0.000044, 0.000043
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
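A NumPy sketch of one step of the gated cell described above: input, forget, and output gates regulate a multiplicatively gated cell state (the constant error carousel). Weights are random placeholders, shapes are illustrative, and no training is shown.

```python
# One forward step of an LSTM cell: gates i, f, o and candidate g.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    z = W @ x + U @ h + b                   # all four gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                       # gated constant-error flow
    h = o * np.tanh(c)
    return h, c

n_in, n_hid = 3, 5
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):       # run over a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```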
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to make compensate for the insufficient data in real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to settle the mismatch domain distribution from two aspects, i.e., data level and knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples the style from content and recombines style and content from different domains to generate a hybrid domain as a transition between synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring and knowledge refining is designed, which achieves not only effective domain-invariant feature extraction, but also transfers the knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach in object detection of traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumption in steel metallurgy comes from an iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process will be encountered if the models are inaccurate or data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is first presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data exhibit that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and yet enhanced while simultaneously using multiple wireless interfaces for D2D, MEC and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline, and accordingly, decide whether to drop from or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increase in number of completed tasks, energy savings, and cost reductions are resulted as compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, being a major cause for its fundamentals to remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment that combines VMs and Containers strengths to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
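A hedged sketch of the reward signal sketched in the highlights above: the agent is penalized for speed deviation, observed with a 1 s reaction delay. The simulation step size and exact reward shape are assumptions, not the paper's values.

```python
from collections import deque

DELAY_STEPS = 10                 # 1 s delay at an assumed 0.1 s simulation step
_history = deque(maxlen=DELAY_STEPS)

def reward(ego_speed, desired_speed):
    """Negative absolute speed deviation, computed on delayed observations."""
    _history.append((ego_speed, desired_speed))
    v, v_des = _history[0]       # act on up-to-1-second-old information
    return -abs(v - v_des)
```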
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
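A tiny illustration of the monitoring-frequency measure discussed above: glances from the NDRT toward the road scene per minute, computed from labeled gaze samples. The sampling rate and label scheme are assumptions.

```python
def monitoring_frequency(gaze_labels, hz=60):
    """Count NDRT->road transitions per minute of gaze data."""
    glances = sum(1 for prev, cur in zip(gaze_labels, gaze_labels[1:])
                  if prev == "ndrt" and cur == "road")
    minutes = len(gaze_labels) / hz / 60
    return glances / minutes

samples = ["ndrt"] * 300 + ["road"] * 60 + ["ndrt"] * 300 + ["road"] * 60
print(round(monitoring_frequency(samples), 1))  # glances per minute
```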
Comparison of Orthogonal vs. Union of Subspace Based Pilots for Multi-Cell Massive MIMO Systems In this paper, we analytically compare orthogonal pilot reuse (OPR) with union of subspace based pilots in terms of channel estimation error and achievable throughput. In OPR, due to the repetition of the same pilot sequences across all cells, inter-cell interference (ICI) leads to pilot contamination, which can severely degrade the performance of cell-edge users. In our proposed union of subspace based method of pilot sequence design, pilots of adjacent cells belong to distinct sets of orthonormal bases. Therefore, each user experiences a lower level of ICI, but from all users of neighboring cells. However, when the pilots are chosen from mutually unbiased orthonormal bases (MUOB), the ICI power scales down exactly as the inverse of the pilot length, leading to low ICI. Further, as the number of users increases, it may no longer be feasible to allot orthogonal pilots to all users within a cell. We find that, with a limited number of pilot sequences, MUOB is significantly more resilient to intra-cell interference, yielding better channel estimates compared to OPR. On the other hand, when the pilot length is larger than the number of users, while OPR achieves channel estimates with very high accuracy for some of the users, MUOB is able to provide a more uniform quality of channel estimation across all users in the cell. We evaluate the fairness of OPR vis-à-vis MUOB using the Jain's fairness metric and max-min index. Via numerical simulations, we observe that the average fairness as well as convergence rates of utility metrics measured using MUOB pilots outperform the conventional OPR scheme.
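A quick numerical check of the MUOB property quoted above, using a classic mutually unbiased pair (the standard basis and the unitary DFT basis) as a stand-in for the paper's pilot construction: cross-correlation magnitude is 1/sqrt(L), so per-pilot interference power scales as 1/L.

```python
import numpy as np

L = 16                                        # pilot length (assumed)
cell_a = np.eye(L)                            # standard orthonormal basis
cell_b = np.fft.fft(np.eye(L)) / np.sqrt(L)   # unitary DFT basis

cross = np.abs(cell_a.conj().T @ cell_b) ** 2
print(np.allclose(cross, 1.0 / L))            # True: ICI power = 1/L per pilot
```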
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
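A minimal scalar Kalman filter in the spirit of the trust estimator described above. The real framework fuses several sensed behaviors (eye tracking, usage time, NDRT performance); here a single scalar measurement stands in, and all noise parameters are assumptions.

```python
def kalman_step(x, P, z, q=0.01, r=0.1):
    """One predict/update cycle for a random-walk trust model."""
    # Predict: trust is assumed to follow a random walk.
    P = P + q
    # Update with behavioral measurement z (e.g., 1 - monitoring ratio).
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.5, 1.0
for z in [0.6, 0.7, 0.65, 0.8]:
    x, P = kalman_step(x, P, z)
print(round(x, 3))  # trust estimate drifts toward the measured behavior
```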
score_0 to score_13: 1, 0.002006, 0.001233, 0.000821, 0.000631, 0.000525, 0.000386, 0.000296, 0.000174, 0.000088, 0.000063, 0.000053, 0.000047, 0.000046
Sequence to Sequence Learning with Neural Networks. Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
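A compact sketch of the encoder-decoder described above, assuming PyTorch; the vocabulary and layer sizes are placeholders, and the paper's source-reversal trick is shown on the input indices.

```python
import torch
import torch.nn as nn

V, E, H = 1000, 64, 128          # vocab, embedding, hidden sizes (assumed)
emb = nn.Embedding(V, E)
encoder = nn.LSTM(E, H, num_layers=2, batch_first=True)
decoder = nn.LSTM(E, H, num_layers=2, batch_first=True)
proj = nn.Linear(H, V)

src = torch.randint(0, V, (1, 7))
tgt = torch.randint(0, V, (1, 5))

src_rev = torch.flip(src, dims=[1])   # reverse source word order
_, state = encoder(emb(src_rev))      # fixed-size "thought vector" state
out, _ = decoder(emb(tgt), state)     # condition the decoder on it
logits = proj(out)                    # (1, 5, V) next-word scores
print(logits.shape)
```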
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
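A hedged sketch of style-content recombination in the spirit of SCD-DR, using AdaIN-style statistics swapping as a stand-in; the paper's module is learned and more involved, and the feature shapes below are assumptions.

```python
import numpy as np

def recombine(content_feat, style_feat, eps=1e-5):
    """Give content features the channel-wise mean/std of style features."""
    c_mu = content_feat.mean((1, 2), keepdims=True)
    c_std = content_feat.std((1, 2), keepdims=True)
    s_mu = style_feat.mean((1, 2), keepdims=True)
    s_std = style_feat.std((1, 2), keepdims=True)
    return (content_feat - c_mu) / (c_std + eps) * s_std + s_mu

synthetic = np.random.rand(8, 16, 16)   # (C, H, W) synthetic-domain features
real = np.random.rand(8, 16, 16)        # real-domain features
hybrid = recombine(synthetic, real)     # synthetic content, real style
print(hybrid.shape)
```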
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
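A small illustration of the key observation quoted above: triggers are conjunctive, observations disjunctive. The dataclass fields are assumptions about how an annotated erratum might be represented, not RemembERR's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Erratum:
    triggers: set      # ALL must be applied to expose the bug
    observations: set  # ANY single deviation suffices to detect it

def bug_detected(erratum, applied_triggers, seen_deviations):
    return (erratum.triggers <= applied_triggers
            and bool(erratum.observations & seen_deviations))

e = Erratum({"power_transition", "custom_msr"}, {"hang", "wrong_result"})
print(bug_detected(e, {"power_transition", "custom_msr", "smt"}, {"hang"}))  # True
print(bug_detected(e, {"power_transition"}, {"hang"}))                       # False
```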
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
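A hedged sketch of the final dispatch step described above: pick the operating-condition model whose cluster center is nearest to the current sintering state. The cluster centers and per-condition models below are placeholders, not the paper's fitted values.

```python
import numpy as np

centers = np.array([[0.2, 0.5], [0.8, 0.3], [0.5, 0.9]])   # WKFCM centers (assumed)
models = [lambda x: 0.60, lambda x: 0.70, lambda x: 0.65]  # stand-ins for per-condition BLMs

def predict_carbon_efficiency(state):
    """Route the current state to its nearest operating-condition model."""
    idx = int(np.argmin(np.linalg.norm(centers - state, axis=1)))
    return models[idx](state)

print(predict_carbon_efficiency(np.array([0.75, 0.25])))   # routed to model 1
```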
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and notable energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration type of each AV within a group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Comparison of Orthogonal vs. Union of Subspace Based Pilots for Multi-Cell Massive MIMO Systems In this paper, we analytically compare orthogonal pilot reuse (OPR) with union of subspace based pilots in terms of channel estimation error and achievable throughput. In OPR, due to the repetition of the same pilot sequences across all cells, inter-cell interference (ICI) leads to pilot contamination, which can severely degrade the performance of cell-edge users. In our proposed union of subspace based method of pilot sequence design, pilots of adjacent cells belong to distinct sets of orthonormal bases. Therefore, each user experiences a lower level of ICI, but from all users of neighboring cells. However, when the pilots are chosen from mutually unbiased orthonormal bases (MUOB), the ICI power scales down exactly as the inverse of the pilot length, leading to low ICI. Further, as the number of users increases, it may no longer be feasible to allot orthogonal pilots to all users within a cell. We find that, with a limited number of pilot sequences, MUOB is significantly more resilient to intra-cell interference, yielding better channel estimates compared to OPR. On the other hand, when the pilot length is larger than the number of users, while OPR achieves channel estimates with very high accuracy for some of the users, MUOB is able to provide a more uniform quality of channel estimation across all users in the cell. We evaluate the fairness of OPR vis-à-vis MUOB using the Jain's fairness metric and max-min index. Via numerical simulations, we observe that the average fairness as well as convergence rates of utility metrics measured using MUOB pilots outperform the conventional OPR scheme.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0 to score_13: 1, 0.001931, 0.001187, 0.00079, 0.000608, 0.000505, 0.000371, 0.000285, 0.000168, 0.000084, 0.000061, 0.000051, 0.000046, 0.000045
Long-term Recurrent Convolutional Networks for Visual Recognition and Description. Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.
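A minimal LRCN-style sketch, assuming PyTorch: per-frame CNN features feed a recurrent layer, so the model is "doubly deep" in space and time. The tiny CNN and the class count are placeholders for the paper's backbone and tasks.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())   # (B*T, 8) per frame
rnn = nn.LSTM(8, 16, batch_first=True)
head = nn.Linear(16, 10)                                     # 10 activity classes (assumed)

video = torch.randn(2, 5, 3, 32, 32)                         # batch of 5-frame clips
B, T = video.shape[:2]
feats = cnn(video.reshape(B * T, 3, 32, 32)).reshape(B, T, -1)
out, _ = rnn(feats)                                          # temporal dynamics
logits = head(out[:, -1])                                    # classify from last step
print(logits.shape)                                          # torch.Size([2, 10])
```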
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and notable energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration type of each AV within a group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Non-Strict Cache Coherence: Exploiting Data-Race Tolerance in Emerging Applications Software distributed shared memory (DSM) platforms on networks of workstations tolerate large network latencies by employing one of several weak memory consistency models. Data-race tolerant applications, such as Genetic Algorithms (GAs), Probabilistic Inference, etc., offer an additional degree of freedom to tolerate network latency: they do not synchronize shared memory references, and behave correctly when supplied outdated shared data. However, these algorithms often have a high communication-to-computation ratio and can flood the network with messages in the presence of large message delays. We study the performance of controlled asynchronous implementations of these algorithms via the use of our previously proposed blocking Global Read memory access primitive. Global Read implements non-strict cache coherence by guaranteeing to return to the reader a shared datum value from within a specified staleness range. Experiments on an IBM SP2 multicomputer with an Ethernet show significant performance improvements for controlled asynchronous implementations. On a lightly loaded Ethernet network, most of the GA benchmarks see 30% to 40% improvement over the best competitor for 2 to 16 processors, while two of the Probabilistic Inference benchmarks see more than 80% improvement for two processors. As the network load increases, the benefits of non-strict cache coherence increase significantly.
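Illustrative pseudocode for the Global Read idea above: a read returns a cached copy only if it is no staler than a caller-specified bound, instead of enforcing strict coherence. The timing and refresh API below are assumptions for the sketch, not the DSM system's actual primitive.

```python
import time

class NonStrictCache:
    def __init__(self, fetch):
        self._fetch = fetch                  # function that reads the home node
        self._value, self._stamp = fetch(), time.time()

    def global_read(self, max_staleness):
        """Return a value at most `max_staleness` seconds old."""
        if time.time() - self._stamp > max_staleness:
            self._value, self._stamp = self._fetch(), time.time()
        return self._value

cache = NonStrictCache(lambda: 42)
print(cache.global_read(max_staleness=0.5))
```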
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from their estimated values. The objective is near-optimal rescheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed once the exact times are determined, considering machine set-up. In order to evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
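A toy stand-in for the reschedule-on-deviation workflow above: greedy list scheduling by least-loaded machine, re-run once exact processing times are known. The paper's ABC-inspired heuristic is far more elaborate; this only illustrates the two-phase scheduling idea.

```python
def schedule(proc_times, m):
    """Longest-processing-time-first list scheduling; returns the makespan."""
    loads = [0.0] * m
    for t in sorted(proc_times, reverse=True):
        loads[loads.index(min(loads))] += t   # assign to least-loaded machine
    return max(loads)

estimated = [4, 3, 3, 2, 2]
exact = [5, 3, 2, 2, 1]                       # deviations observed at run time
print(schedule(estimated, m=2), schedule(exact, m=2))  # makespans before/after re-scheduling
```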
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. Besides, a split-and-accumulate (SAC) computing pattern replacing conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
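A quick check of the motivation quoted above: count the fraction of zero bits in a set of signed 8-bit weights. Random weights are a stand-in here (giving roughly 50%); the 68.9% figure in the paper refers to trained CNN models, whose weight distributions are concentrated near zero.

```python
import numpy as np

w = np.random.default_rng(0).integers(-128, 128, size=10_000, dtype=np.int8)
bits = np.unpackbits(w.view(np.uint8))   # all 8 bits of every weight
print(1.0 - bits.mean())                 # fraction of zero bits
```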
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0 to score_13: 1, 0.002528, 0.001553, 0.001034, 0.000796, 0.000661, 0.000486, 0.000373, 0.000219, 0.00011, 0.00008, 0.000067, 0.00006, 0.000058
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
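A minimal soft-attention step in the spirit of the model above, assuming numpy: scores over the L annotation vectors are softmaxed into weights, and the context is their weighted sum. The scoring function and all dimensions are simplified placeholders for the paper's learned alignment network.

```python
import numpy as np

rng = np.random.default_rng(0)
L, D, H = 196, 512, 256                  # 14x14 regions, feature and state sizes (assumed)
a = rng.standard_normal((L, D))          # annotation vectors from the CNN
h = rng.standard_normal(H)               # current LSTM state
W_a = rng.standard_normal((D, 1)) * 0.01
W_h = rng.standard_normal((H, 1)) * 0.01

scores = (a @ W_a + h @ W_h).ravel()     # (L,) alignment scores
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                     # attention weights, sum to 1
context = alpha @ a                      # (D,) expected visual context vector
print(context.shape, alpha.sum())
```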
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add it to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading: compared with alternative baseline approaches, it significantly increases the number of completed tasks while delivering energy savings and cost reductions.
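A minimal sketch of the SVM-based admission idea follows; the feature set (task size, deadline, queue length) and the toy training data are assumptions for illustration, not the paper's actual design.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed features per task: [size, deadline, current queue length];
# label 1 means the task historically completed within its deadline.
X = np.array([[1.0, 5.0, 2], [4.0, 2.0, 6], [2.0, 4.0, 1], [5.0, 1.0, 8]])
y = np.array([1, 0, 1, 0])
clf = SVC(kernel="rbf").fit(X, y)

def admit(task_features):
    # True -> add to the user's queue for processing, False -> drop
    return bool(clf.predict([task_features])[0])

print(admit([1.5, 4.5, 2]))
```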
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why practical URLLC support remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
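For illustration, the coalition-style regrouping pattern underlying mechanisms like CG-DDM can be sketched as switch-until-stable iteration; the cost model below is an assumed toy (shared computation plus per-member transmission), not the paper's.

```python
# Assumed toy cost: computation is shared within a group, transmission overhead
# grows with group size; the optimum group size balances the two.
def av_cost(group_size, compute_cost=10.0, tx_cost=1.5):
    return compute_cost / group_size + tx_cost * (group_size - 1)

def nash_stable_groups(n_avs, n_groups):
    groups = [i % n_groups for i in range(n_avs)]      # initial assignment
    sizes = [groups.count(g) for g in range(n_groups)]
    changed = True
    while changed:                                     # iterate until no AV wants to move
        changed = False
        for av in range(n_avs):
            g = groups[av]
            best_g, best_c = g, av_cost(sizes[g])      # cost of staying
            for h in range(n_groups):
                if h != g and av_cost(sizes[h] + 1) < best_c:
                    best_g, best_c = h, av_cost(sizes[h] + 1)
            if best_g != g:                            # beneficial unilateral switch
                sizes[g] -= 1; sizes[best_g] += 1
                groups[av] = best_g
                changed = True
    return groups                                      # Nash-stable assignment

print(nash_stable_groups(9, 3))
```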
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good generalization capability.
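A brief sketch of the two stated ingredients, a speed-deviation reward and a 1 s reaction delay, under assumed details such as a 0.1 s simulation step and the exact reward scaling:

```python
from collections import deque

def reward(ego_speed, leader_speed):
    # speed deviation as the reward signal; the exact scaling is an assumption
    return -abs(ego_speed - leader_speed)

class DelayedObservation:
    """Feed the agent the state from delay_steps ago (reaction delay)."""
    def __init__(self, delay_steps):
        self.buf = deque(maxlen=delay_steps + 1)
    def __call__(self, state):
        self.buf.append(state)
        return self.buf[0]                 # oldest state in the window

delayed = DelayedObservation(delay_steps=10)   # 10 steps at 0.1 s = 1 s delay
print(reward(28.0, 30.0), delayed((28.0, 30.0)))
```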
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes, such as changes in task features, may occur frequently. These changes cause deviations from the initial schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for the dynamic flexible job-shop scheduling problem (DFJSP). The problem consists of n jobs to be processed on m machines, where the actual processing times deviate from their estimates. The objective is near-optimal rescheduling after any change in tasks so as to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed once the exact times are known, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times; the statistical results illustrate the efficiency of the proposed algorithm.
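To make the evaluate-then-reschedule loop concrete, here is a minimal makespan evaluator over a fixed dispatch order; job routing decisions and the ABC-inspired search itself are assumed away.

```python
def makespan(ops, n_machines):
    # ops: list of (job, machine, proc_time) in dispatch order
    machine_free = [0.0] * n_machines
    job_free = {}
    end = 0.0
    for job, m, t in ops:
        start = max(machine_free[m], job_free.get(job, 0.0))  # precedence + machine
        machine_free[m] = job_free[job] = start + t
        end = max(end, start + t)
    return end

# Toy instance: the same dispatch order evaluated with estimated vs. actual times.
estimated = [(0, 0, 3.0), (1, 1, 2.0), (0, 1, 2.5), (1, 0, 4.0)]
actual    = [(0, 0, 3.6), (1, 1, 2.2), (0, 1, 2.1), (1, 0, 4.9)]
print(makespan(estimated, 2), makespan(actual, 2))  # triggers rescheduling if worse
```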
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents DMM, a fast map matching framework for cellular data that adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, map matching reduces to RNN inference over cell tower sequences, which is fast. To turn DMM into a practical system, several challenges are addressed with a set of techniques: a spatial-aware representation of input cell tower sequences, an encoder-decoder map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset show that DMM provides high map matching accuracy (80.43% precision and 85.42% recall) and reduces the average inference time of HMM-based approaches by a factor of 46.58.
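A minimal encoder-decoder sketch in the spirit of DMM follows; the dimensions, the <bos> token, and greedy decoding are assumptions, and the paper's spatial-aware input representation and RL-based refinement are not shown.

```python
import torch
import torch.nn as nn

class Seq2SeqMatcher(nn.Module):
    def __init__(self, n_cells, n_roads, d=64):
        super().__init__()
        self.cell_emb = nn.Embedding(n_cells, d)
        self.road_emb = nn.Embedding(n_roads, d)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.proj = nn.Linear(d, n_roads)

    @torch.no_grad()
    def match(self, cells, max_len=32):        # cells: (B, T) cell tower ids
        _, h = self.encoder(self.cell_emb(cells))
        tok = torch.zeros(cells.size(0), 1, dtype=torch.long)  # assumed <bos> = 0
        out = []
        for _ in range(max_len):               # greedy decoding of road segments
            o, h = self.decoder(self.road_emb(tok), h)
            tok = self.proj(o).argmax(-1)
            out.append(tok)
        return torch.cat(out, dim=1)           # (B, max_len) road segment ids

m = Seq2SeqMatcher(n_cells=1000, n_roads=500)
print(m.match(torch.randint(0, 1000, (2, 12))).shape)  # torch.Size([2, 32])
```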
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time, during actual operation. One possible approach for measuring trust is to model its dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
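A scalar Kalman filter conveys the estimation pattern described above; the single-state model and the behavioral measurement score below are simplifying assumptions, whereas the paper identifies its model parameters from study data.

```python
class TrustKF:
    """Minimal scalar Kalman filter: state = one trust level in [0, 1]."""
    def __init__(self, x0=0.5, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r
    def step(self, z):
        self.p += self.q                  # predict: trust assumed to persist
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # update with behavioral measurement
        self.p *= (1 - k)
        return self.x

kf = TrustKF()
for z in [0.6, 0.7, 0.65, 0.9]:           # hypothetical per-interaction scores
    print(round(kf.step(z), 3))           # derived from gaze, usage, NDRT performance
```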
scores (score_0 to score_13): 1, 0.002255, 0.001386, 0.000923, 0.00071, 0.00059, 0.000434, 0.000333, 0.000196, 0.000098, 0.000071, 0.00006, 0.000053, 0.000052
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our synthetic-to-real domain adaptation approach for object detection in traffic scenes.
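As an aside, one common way to realize a style-content recombination is an AdaIN-style statistics swap, sketched below; this is a generic stand-in for the idea, not the paper's SCD-DR module.

```python
import numpy as np

def recombine(content_feat, style_feat, eps=1e-5):
    # normalize the content features, then re-scale with the style statistics
    c_mu, c_std = content_feat.mean(), content_feat.std() + eps
    s_mu, s_std = style_feat.mean(), style_feat.std() + eps
    return (content_feat - c_mu) / c_std * s_std + s_mu

synthetic = np.random.rand(8, 8)          # stand-in feature maps, not real data
real = 2.0 * np.random.rand(8, 8) + 1.0
hybrid = recombine(synthetic, real)       # synthetic content with real "style"
print(hybrid.mean(), real.mean())         # hybrid adopts the real-domain statistics
```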
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major share of the energy consumed in steel metallurgy comes from the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, better reflecting the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model predicts carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add it to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading: compared with alternative baseline approaches, it significantly increases the number of completed tasks while delivering energy savings and cost reductions.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why practical URLLC support remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
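The straggler effect mentioned above can be seen in a few lines: with magnitude alignment under a per-device power budget, the common scaling factor is capped by the weakest channel gain. The noiseless sketch below makes assumed simplifications (unit-norm updates, no relays).

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.abs(rng.normal(1.0, 0.5, size=5)) + 1e-3   # channel gains of 5 devices
w = rng.normal(size=(5, 4))                        # local model updates
p_max = 1.0                                        # per-device power budget
eta = (np.sqrt(p_max) * h.min()) ** 2              # alignment capped by weakest h
tx = (np.sqrt(eta) / h)[:, None] * w               # per-device pre-scaling <= sqrt(p_max)
received = (h[:, None] * tx).sum(0)                # superposition over the air
aggregate = received / np.sqrt(eta)                # server recovers the sum
print(np.allclose(aggregate, w.sum(0)))            # noiseless case: exact sum, True
```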
Predicting Node Failure in Cloud Service Systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures can seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes; in reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique that predicts the failure-proneness of a node in a cloud service system from historical data, before a node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, thereby improving service availability. Predicting node failure in cloud service systems is challenging because a node failure can be caused by a variety of reasons and is reflected by many temporal and spatial signals; furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system; the results confirm its effectiveness. We have also successfully applied the proposed approach in real industrial practice.
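The combination pattern (two per-node scores fed to a ranker, then a threshold) can be sketched as follows; the scores, ranker weights, and threshold are illustrative stand-ins for the LSTM, Random Forest, ranking model, and cost-sensitive function.

```python
import numpy as np

temporal_score = np.array([0.9, 0.2, 0.6, 0.1])   # stand-in for LSTM outputs
spatial_score = np.array([0.8, 0.3, 0.4, 0.2])    # stand-in for Random Forest outputs
features = np.stack([temporal_score, spatial_score], axis=1)
rank_score = features @ np.array([0.6, 0.4])       # assumed linear ranker weights
order = np.argsort(-rank_score)                    # nodes ranked by failure-proneness
threshold = 0.5                                    # stand-in for cost-sensitive choice
faulty = [int(i) for i in order if rank_score[i] > threshold]
print(order, faulty)                               # candidates for VM migration
```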
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time, during actual operation. One possible approach for measuring trust is to model its dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
scores (score_0 to score_13): 1, 0.002074, 0.001275, 0.000849, 0.000653, 0.000543, 0.000399, 0.000306, 0.00018, 0.000091, 0.000066, 0.000055, 0.000049, 0.000048
Activity Recognition from Accelerometer Data on a Mobile Phone Real-time monitoring of human movements can easily be envisaged as a useful tool for many purposes and future applications. This paper presents the implementation of a real-time classification system for some basic human movements using a conventional mobile phone equipped with an accelerometer. The aim of this study was to assess the current capacity of conventional mobile phones to execute, in real time, all the pattern recognition algorithms necessary to classify the corresponding human movements. No server-side data processing is involved, so the monitoring is completely decentralized; only additional software would be required to report the monitoring results remotely. The feasibility of this approach opens a new range of opportunities to develop applications at a reasonably low cost.
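A sketch of the standard on-device pipeline this describes, windowed features plus a lightweight classifier, follows; the feature choice, classifier, and synthetic signals are assumptions, not the paper's exact algorithms.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(acc, win=50):                  # acc: (N, 3) accelerometer samples
    feats = []
    for i in range(0, len(acc) - win + 1, win):    # non-overlapping windows
        w = acc[i:i + win]
        feats.append(np.concatenate([w.mean(0), w.std(0)]))  # cheap time-domain features
    return np.array(feats)

rng = np.random.default_rng(1)
walking = rng.normal(0, 2.0, (500, 3))             # toy stand-ins, not real recordings
standing = rng.normal(0, 0.2, (500, 3))
X = np.vstack([window_features(walking), window_features(standing)])
y = np.array([1] * 10 + [0] * 10)                  # 1 = walking, 0 = standing
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)   # small enough to run on-phone
print(clf.predict(window_features(rng.normal(0, 2.0, (100, 3)))))
```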
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major share of the energy consumed in steel metallurgy comes from the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, better reflecting the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model predicts carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) that assesses the potential of a task to be completed within its deadline and, accordingly, decides whether to add it to the user's queue for processing or drop it. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading: compared with alternative baseline approaches, it significantly increases the number of completed tasks while delivering energy savings and cost reductions.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why practical URLLC support remains elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs needed to determine the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of AVs across collaborative groups directly affect the performance of cooperative driving. Developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process therefore becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture that helps AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Given that zero bits account for as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
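The zero-bit argument can be made concrete: a multiply becomes shift-adds over only the essential (set) bits of the weight, so zero bits cost nothing. A software rendering of this split-and-accumulate idea follows, with hardware details omitted.

```python
def essential_bits(w):
    # positions of the set bits in an integer weight; zero bits are skipped
    return [b for b in range(w.bit_length()) if (w >> b) & 1]

def sac_multiply(x, w):
    # split-and-accumulate: one shift-add per essential bit instead of a full MAC
    return sum(x << b for b in essential_bits(w))

w = 0b01010001                       # only 3 of 8 bits are essential
print(sac_multiply(5, w), 5 * w)     # identical results: 405 405
```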
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time, during actual operation. One possible approach for measuring trust is to model its dynamics and then apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
scores (score_0 to score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Machine learning methods for classifying human physical activity from on-body accelerometers. The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems, where quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis on the computational algorithms employed for this purpose. In particular, we motivate our current interest in classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed through the analysis of a dataset of accelerometer time series.
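A minimal sketch of HMM-based activity classification, fitting one Gaussian HMM per activity and labeling new sequences by likelihood, is shown below using hmmlearn; the synthetic data and model sizes are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-activity accelerometer training sequences (N, 3).
train = {"walk": rng.normal(0, 2.0, (200, 3)), "rest": rng.normal(0, 0.2, (200, 3))}
models = {a: GaussianHMM(n_components=3, covariance_type="diag",
                         n_iter=20).fit(X) for a, X in train.items()}

def classify(seq):
    # label by the activity whose HMM assigns the highest log-likelihood
    return max(models, key=lambda a: models[a].score(seq))

print(classify(rng.normal(0, 2.0, (50, 3))))   # expected: "walk"
```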
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these design errors after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, collected from all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major share of the energy consumed in steel metallurgy comes from the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions, better reflecting the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model predicts carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to add it to or drop it from the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. A significant increase in the number of completed tasks, along with energy savings and cost reductions, is achieved compared to alternative baseline approaches.
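A minimal sketch of the SVM admission idea follows, assuming an illustrative three-feature task description (size, deadline, queue backlog) and synthetic labels; the paper's actual feature set and training data differ.

```python
# Sketch of SVM-based task admission: classify whether a task is likely to
# finish before its deadline from simple task features. Features are
# illustrative, not the paper's feature set.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Features: [task size, deadline, current queue backlog]
X = rng.uniform(size=(500, 3))
# Synthetic ground truth: small tasks with loose deadlines tend to complete.
y = (X[:, 1] - X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

def admit(task_size, deadline, backlog):
    """Admit the task to the queue only if the SVM predicts completion."""
    return bool(clf.predict([[task_size, deadline, backlog]])[0])

print(admit(0.2, 0.9, 0.1))  # loose deadline: likely admitted
print(admit(0.9, 0.1, 0.8))  # tight deadline, big backlog: likely dropped
```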
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
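As a toy illustration of head-DT selection, the snippet below implements only the simplest highest-bid rule with made-up bid values; the paper's AG-CDM is a full auction game with richer payoffs.

```python
# Toy illustration of head-DT selection by auction: the DT submitting the
# highest bid (e.g., reflecting spare computing capacity) becomes the head.
# This plain highest-bid rule is far simpler than the paper's AG-CDM.
def select_head(bids):
    """bids: dict mapping DT id -> bid value. Returns the winning DT id."""
    return max(bids, key=bids.get)

bids = {"dt_1": 0.7, "dt_2": 1.3, "dt_3": 0.9}  # hypothetical bid values
print(select_head(bids))  # -> "dt_2"
```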
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
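A minimal sketch of the two ingredients named in the highlights, a speed-deviation reward and a 1 s reaction delay, is given below; the 0.1 s step size and the exact reward shaping are assumptions.

```python
# Sketch of a speed-deviation reward with a 1 s reaction delay, as the
# highlights describe; the step size and shaping are assumed.
from collections import deque

DT = 0.1                          # simulation step (s), assumed
DELAY_STEPS = int(1.0 / DT)       # 1 s reaction delay

class DelayedFollower:
    def __init__(self):
        # Buffer actions so the one applied now was chosen 1 s ago.
        self.action_buffer = deque([0.0] * DELAY_STEPS)

    def step_action(self, new_action):
        delayed = self.action_buffer.popleft()  # action chosen 1 s ago
        self.action_buffer.append(new_action)
        return delayed

def reward(ego_speed, human_speed):
    """Negative absolute speed deviation from the recorded human driver."""
    return -abs(ego_speed - human_speed)

print(reward(14.8, 15.0))  # -> -0.2 (closer to human speed is better)
```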
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
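For concreteness, a minimal sketch of how monitoring frequency might be computed from area-of-interest gaze labels follows; the labels and sampling rate are illustrative, not the study's coding scheme.

```python
# Sketch: monitoring frequency as the rate of gaze transitions from the
# non-driving-related task (NDRT) to driving-related areas of interest.
def monitoring_frequency(gaze_labels, hz):
    """gaze_labels: per-sample AOI labels ('road', 'ndrt', ...).
    Returns glances toward the road per minute."""
    glances = sum(1 for prev, cur in zip(gaze_labels, gaze_labels[1:])
                  if prev != "road" and cur == "road")
    minutes = len(gaze_labels) / hz / 60.0
    return glances / minutes

samples = ["ndrt"] * 50 + ["road"] * 10 + ["ndrt"] * 50 + ["road"] * 10
print(monitoring_frequency(samples, hz=2))  # 2 glances over 1 minute -> 2.0
```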
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern replacing conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
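A quick way to reproduce the kind of zero-bit statistic cited above is sketched below, assuming a simple 8-bit symmetric quantization of a toy weight tensor; the paper's measurement methodology may differ.

```python
# Sketch: measuring the fraction of zero bits in 8-bit quantized weights,
# the quantity motivating weight kneading. Quantization details are assumed.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(scale=0.05, size=4096)                  # toy weight tensor
q = np.clip(np.round(w / np.abs(w).max() * 127), -127, 127).astype(np.int8)

bits = np.unpackbits(q.view(np.uint8))                 # 8 bits per weight
zero_bit_fraction = 1.0 - bits.mean()
print(f"zero bits: {zero_bit_fraction:.1%}")           # typically well above 50%
```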
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
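A minimal sketch of the Kalman filter-based estimation idea, reduced to a scalar trust state and a single synthetic measurement per interaction, is shown below; the process and measurement noise values are assumptions.

```python
# Sketch of a scalar Kalman filter tracking a latent trust level from one
# noisy behavioral measurement per interaction. The paper's model couples
# several sensed behaviors; the noise values here are illustrative.
def kalman_step(x, P, z, q=0.01, r=0.25):
    """x, P: prior trust estimate and variance; z: sensed behavior
    (e.g., normalized NDRT engagement, assumed proportional to trust)."""
    P = P + q                # predict: trust evolves as a random walk
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)      # update with measurement z
    P = (1 - K) * P
    return x, P

x, P = 0.5, 1.0              # uninformative initial trust estimate
for z in [0.6, 0.7, 0.65, 0.8]:
    x, P = kalman_step(x, P, z)
print(round(x, 3))
```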
1
0.001945
0.001195
0.000796
0.000612
0.000509
0.000374
0.000287
0.000169
0.000085
0.000062
0.000052
0.000046
0.000045
Activity recognition using cell phone accelerometers Mobile devices are becoming increasingly sophisticated, and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and its applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time-series data into examples that summarize the user activity over 10-second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively, just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine whether a user (perhaps an obese child) is performing a healthy amount of exercise.
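The aggregation step described above, summarizing raw accelerometer samples into fixed-length examples, can be sketched as follows; the 20 Hz sampling rate and the choice of per-axis mean and standard deviation features are assumptions.

```python
# Sketch of the aggregation step: summarize raw tri-axial accelerometer
# samples into fixed 10-second windows of simple per-axis statistics.
import numpy as np

HZ, WINDOW_S = 20, 10
WIN = HZ * WINDOW_S                           # samples per 10-second example

def window_features(acc):
    """acc: (n, 3) array of x/y/z accelerations. Returns one feature row
    per full window: per-axis mean and standard deviation."""
    n = (len(acc) // WIN) * WIN
    windows = acc[:n].reshape(-1, WIN, 3)
    return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

acc = np.random.default_rng(3).normal(size=(2400, 3))  # 2 minutes of data
print(window_features(acc).shape)             # -> (12, 6)
```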
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errors after their microprocessors are launched. The abundance of errata suggests significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize each erratum as describing, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, covering all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with the triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power-level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major share of the energy consumed in steel metallurgy comes from the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and a prerequisite for doing so is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to add it to or drop it from the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. A significant increase in the number of completed tasks, along with energy savings and cost reductions, is achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. The problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal rescheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed after the exact times are determined, considering machine set-up. In order to evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times, and the statistical results illustrate the efficiency of the proposed algorithm.
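A minimal sketch of the schedule-then-reschedule idea follows, with a simple longest-processing-time greedy rule standing in for the Artificial Bee Colony-inspired heuristic and with made-up processing times.

```python
# Sketch of the two-phase idea above: schedule with estimated processing
# times, then re-schedule once exact times are known. A longest-job-first
# greedy rule stands in for the ABC-inspired heuristic.
def greedy_schedule(proc_times, n_machines):
    """proc_times: list of job durations. Returns (assignment, makespan)."""
    loads = [0.0] * n_machines
    assignment = []
    for t in sorted(proc_times, reverse=True):     # longest job first
        m = loads.index(min(loads))                # earliest-free machine
        loads[m] += t
        assignment.append((t, m))
    return assignment, max(loads)

estimated = [4, 3, 7, 2, 5]                        # hypothetical estimates
_, est_makespan = greedy_schedule(estimated, n_machines=2)
exact = [5, 3, 6, 2, 6]                            # realized durations
_, new_makespan = greedy_schedule(exact, n_machines=2)
print(est_makespan, new_makespan)
```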
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures can seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique that can predict the failure-proneness of a node in a cloud service system based on historical data, before the node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, thereby improving service availability. Predicting node failure in cloud service systems is challenging because a node failure can be caused by a variety of reasons and reflected by many temporal and spatial signals. Furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate the spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
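A minimal sketch of the ranking stage follows, with a fixed weighted sum standing in for MING's learned combination of the LSTM and Random Forest outputs and a hand-picked threshold in place of the cost-sensitive one.

```python
# Sketch of the ranking stage described above: combine per-node scores from
# a temporal model and a spatial model, rank nodes by failure-proneness,
# and cut at a threshold. The fixed weight and threshold are assumptions;
# MING learns both from data.
def rank_nodes(temporal, spatial, w=0.6, threshold=0.5):
    """temporal, spatial: dicts node -> score in [0, 1]."""
    combined = {n: w * temporal[n] + (1 - w) * spatial[n] for n in temporal}
    ranked = sorted(combined, key=combined.get, reverse=True)
    return [n for n in ranked if combined[n] >= threshold]

temporal = {"node_a": 0.9, "node_b": 0.3, "node_c": 0.6}
spatial = {"node_a": 0.7, "node_b": 0.2, "node_c": 0.8}
print(rank_nodes(temporal, spatial))  # nodes flagged for VM migration
```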
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.00231
0.00142
0.000945
0.000727
0.000604
0.000444
0.000341
0.000201
0.000101
0.000073
0.000062
0.000055
0.000053
A Survey on Human Activity Recognition using Wearable Sensors Providing accurate and opportune information on people's activities and behaviors is one of the most important tasks in pervasive computing. Innumerable applications can be envisioned, for instance, in medical, security, entertainment, and tactical scenarios. Despite human activity recognition (HAR) having been an active field for more than a decade, there are still key aspects that, if addressed, would constitute a significant turn in the way people interact with mobile devices. This paper surveys the state of the art in HAR based on wearable sensors. A general architecture is first presented along with a description of the main components of any HAR system. We also propose a two-level taxonomy according to the learning approach (either supervised or semi-supervised) and the response time (either offline or online). Then, the principal issues and challenges are discussed, as well as the main solutions to each of them. Twenty-eight systems are qualitatively evaluated in terms of recognition performance, energy consumption, obtrusiveness, and flexibility, among other attributes. Finally, we present some open problems and ideas that, due to their high relevance, should be addressed in future research.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errors after their microprocessors are launched. The abundance of errata suggests significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize each erratum as describing, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, covering all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with the triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power-level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A major share of the energy consumed in steel metallurgy comes from the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and a prerequisite for doing so is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to add it to or drop it from the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. A significant increase in the number of completed tasks, along with energy savings and cost reductions, is achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume substantial time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviations as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern replacing conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of adjusting their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002015
0.001238
0.000825
0.000634
0.000527
0.000388
0.000297
0.000175
0.000088
0.000064
0.000054
0.000048
0.000047
Extreme learning machine: Theory and applications. It is clear that the learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for past decades. Two key reasons may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results, based on a few artificial and real benchmark function approximation and classification problems including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks. (For the preliminary idea of the ELM algorithm, refer to "Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks," Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004.)
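The ELM recipe is compact enough to sketch directly: fix random hidden weights, compute the hidden-layer output matrix, and solve for the output weights with a pseudoinverse. The sigmoid activation, network size, and toy regression target below are illustrative choices.

```python
# Minimal sketch of the ELM recipe described above: random, untrained hidden
# weights followed by a least-squares (pseudoinverse) solve for the output
# weights. Single-output regression with a sigmoid activation.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X).sum(axis=1)                      # toy target function

L = 50                                          # number of hidden nodes
W = rng.normal(size=(3, L))                     # random input weights (fixed)
b = rng.normal(size=L)                          # random biases (fixed)

H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                    # analytic output weights

y_hat = H @ beta
print(f"train RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.4f}")
```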
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errors after their microprocessors are launched. The abundance of errata suggests significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize each erratum as describing, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata, covering all Intel Core and AMD microprocessors since 2008 and comprising 2,563 individual errata. Each RemembERR entry is annotated with the triggers, contexts, and observations extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power-level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
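To make the pipeline above concrete, here is a minimal sketch of the condition-then-model idea: cluster the operating data with fuzzy C-means (plain FCM stands in for the paper's weighted kernel variant), fit a simple per-condition regressor (ordinary least squares stands in for the broad learning model), and dispatch predictions by the nearest cluster center. All names and settings are illustrative, not the authors' code.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means; stands in for the paper's weighted kernel variant."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]        # condition centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U, V

def fit_condition_models(X, y, U):
    """One least-squares regressor per condition (a stand-in for the BLM)."""
    labels = U.argmax(axis=1)
    models = {}
    for k in np.unique(labels):
        A = np.c_[X[labels == k], np.ones((labels == k).sum())]
        models[k] = np.linalg.lstsq(A, y[labels == k], rcond=None)[0]
    return models

def predict(x, V, models):
    """Nearest-neighbor criterion: invoke the model of the closest center."""
    k = min(models, key=lambda j: np.linalg.norm(V[j] - x))
    return np.r_[x, 1.0] @ models[k]
```

The appeal of this structure is that when data drift invalidates one operating condition, only that condition's small model needs refitting rather than a single monolithic predictor.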
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
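As a rough illustration of the admission step alone, the sketch below trains a binary SVM on synthetic task records to predict whether a task would finish before its deadline. The feature set and labeling rule are invented for the example; only the admit-or-drop decision mirrors the paper's idea.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical task features: [input size, cpu demand, deadline slack, queue load]
X = rng.random((500, 4))
# Invented labeling rule: low demand relative to slack means on-time completion
y = (X[:, 0] + X[:, 1] < 0.8 * X[:, 2] + 0.3 * (1 - X[:, 3])).astype(int)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)

def admit(task):
    """Add the task to the queue only if the SVM predicts on-time completion."""
    return bool(clf.predict(np.asarray(task).reshape(1, -1))[0])

print(admit([0.2, 0.1, 0.9, 0.1]))   # low demand, generous slack: admit
print(admit([0.9, 0.8, 0.1, 0.9]))   # heavy task, tight deadline: drop
```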
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity at the same time.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated a good capability of generalization.
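Two of the highlights above, the speed-deviation reward and the 1 s reaction delay, are easy to render in a toy simulation loop; the sketch below shows only those two ingredients and none of the authors' DDPG training code.

```python
from collections import deque

DT = 0.1                          # simulation step (s), an illustrative choice
DELAY = int(1.0 / DT)             # the paper's 1 s reaction delay, in steps

action_buffer = deque([0.0] * DELAY, maxlen=DELAY)

def step(v, action):
    """Advance the follower's speed using the action issued 1 s earlier."""
    delayed_a = action_buffer[0]  # oldest buffered action takes effect now
    action_buffer.append(action)
    return max(0.0, v + delayed_a * DT)

def reward(v_follower, v_leader):
    """Speed-deviation reward: smaller deviation from the leader is better."""
    return -abs(v_follower - v_leader)

v = 10.0
for _ in range(30):
    v = step(v, 1.0)              # constant commanded acceleration
print(round(v, 2))                # 12.0: the speed rise lags by the 1 s delay
```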
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable ineffectual computation they cause, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
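The two observations the paper builds on, namely that most weight bits are zero and that a dot product can be computed as shift-adds over only the set bits, can be reproduced in a few lines of software. The sketch below is purely illustrative; the paper's contribution is the kneading scheme and the hardware design, not this code.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, 1024)                                # toy weight tensor
q = np.clip(np.round(w / 0.01), -127, 127).astype(np.int8)   # 8-bit quantization

# Observation 1: most magnitude bits are zero (sign is handled separately)
mag_bits = np.unpackbits(np.abs(q).astype(np.uint8)[:, None], axis=1)
print("zero-bit fraction:", round(1 - mag_bits.mean(), 3))

# Observation 2: a dot product needs only the set bits (shift-add, no multiply)
def sac_dot(qw, x):
    acc = 0
    for i, wi in enumerate(qw.astype(int)):
        neg, mag, b = wi < 0, abs(wi), 0
        while mag:
            if mag & 1:
                acc += (-x[i] if neg else x[i]) << b   # one shifted add per set bit
            mag >>= 1
            b += 1
    return acc

x = [1, 2, 3, 4]
assert sac_dot(q[:4], x) == int(np.dot(q[:4].astype(int), x))
```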
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
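A minimal rendering of the estimation idea: treat trust as a slowly varying scalar state and correct it with behavioral measurements through a scalar Kalman filter. The noise settings and the measurement definition below are assumptions made for illustration, not the paper's identified parameters.

```python
class TrustKF:
    """Scalar Kalman filter over a hypothetical trust state in [0, 1]."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x, self.p = 0.5, 1.0     # initial trust estimate and variance
        self.q, self.r = q, r         # process / measurement noise

    def update(self, z):
        # Predict: trust is assumed to persist between interactions
        self.p += self.q
        # Correct with a behavioral measurement z (e.g., 1 - monitoring ratio)
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x

kf = TrustKF()
for z in [0.4, 0.55, 0.6, 0.7]:       # synthetic measurements
    print(round(kf.update(z), 3))     # estimate tracks the behavior smoothly
```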
Scores (score_0–score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
On how pachycondyla apicalis ants suggest a new search algorithm In this paper we present a new optimization algorithm based on a model of the foraging behavior of a population of primitive ants (Pachycondyla apicalis). These ants are characterized by a relatively simple but efficient strategy for prey search in which individuals hunt alone and try to cover a given area around their nest. The ant colony search behavior consists of a set of parallel local searches on hunting sites with a sensitivity to successful sites. Also, their nest is periodically moved. Accordingly, the proposed algorithm performs parallel random searches in the neighborhood of points called hunting sites. Hunting sites are created in the neighborhood of a point called the nest. At constant intervals of time the nest is moved, which corresponds to a restart operator that re-initializes the parallel searches. We have applied this algorithm, called API, to numerical optimization problems with encouraging results.
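A compact sketch of the scheme described above: parallel local searches around hunting sites created near a nest, with improved sites kept and the nest periodically moved to the best point found, which restarts the site creation. Parameter values and implementation details are illustrative choices, not the original API code.

```python
import numpy as np

def api(f, dim, n_ants=20, n_sites=2, iters=200, nest_moves=10, seed=1):
    """Toy API-style search: local searches on hunting sites, periodic nest move."""
    rng = np.random.default_rng(seed)
    nest = rng.uniform(-5, 5, dim)
    best_x, best_f = nest.copy(), f(nest)
    for _ in range(nest_moves):
        # each ant creates hunting sites near the nest, then explores them
        sites = nest + rng.normal(0, 1.0, (n_ants, n_sites, dim))
        site_f = np.apply_along_axis(f, 2, sites)
        for _ in range(iters // nest_moves):
            trial = sites + rng.normal(0, 0.1, sites.shape)   # local search
            trial_f = np.apply_along_axis(f, 2, trial)
            better = trial_f < site_f                          # keep successful sites
            sites[better], site_f[better] = trial[better], trial_f[better]
        i = np.unravel_index(site_f.argmin(), site_f.shape)
        if site_f[i] < best_f:
            best_x, best_f = sites[i].copy(), site_f[i]
        nest = best_x.copy()          # nest move: restart sites around the best point
    return best_x, best_f

print(api(lambda x: np.sum(x**2), dim=5)[1])   # sphere test function
```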
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity at the same time.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable ineffectual computation they cause, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001826, 0.001123, 0.000748, 0.000575, 0.000478, 0.000351, 0.00027, 0.000159, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
An Automatic Method For Finding The Greatest Or Least Value Of A Function
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, complete retraining is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, owing to its flexible structure, the developed model can also be used for efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity at the same time.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A latent space-based estimation of distribution algorithm for large-scale global optimization Large-scale global optimization problems (LSGOs) have received considerable attention in the field of meta-heuristic algorithms. Estimation of distribution algorithms (EDAs) are a major branch of meta-heuristic algorithms; however, effectively building the probabilistic model for an EDA in high dimensions remains an obstacle, making EDAs less attractive due to their large computational requirements. To overcome these shortcomings, this paper proposes a latent space-based EDA (LS-EDA), which transforms the multivariate probabilistic model of a Gaussian-based EDA into its principal-component latent subspace of lower dimensionality. LS-EDA efficiently reduces the complexity of the EDA while preserving the key information in its probability model, thereby scaling up its performance for LSGOs. When the original dimensions are projected onto the latent subspace, the dimensions with larger projected values contribute more to the optimization process. LS-EDA can also help recognize and understand the problem structure, especially for black-box optimization problems. Owing to the dimensionality reduction, its computational budget and population size can be effectively reduced, while its performance remains highly competitive in comparison with state-of-the-art meta-heuristic algorithms for LSGOs. To understand the strengths and weaknesses of LS-EDA, we carried out extensive computational studies. Our results reveal that LS-EDA outperforms the others on benchmark functions with overlapping and nonseparable variables.
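The core move, fitting the Gaussian model in a low-dimensional principal-component subspace of the elite samples rather than in the full search space, can be sketched briefly. This toy version uses an SVD of the elite set and is not the authors' implementation; all parameter values are illustrative.

```python
import numpy as np

def ls_eda(f, dim=100, pop=50, elite=10, latent=5, iters=100, seed=0):
    """Toy latent-space EDA: model the elites in a PCA subspace, sample, map back."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        E = X[np.argsort(fit)[:elite]]               # elite set
        mu = E.mean(axis=0)
        # principal-component latent subspace of the elites
        _, s, Vt = np.linalg.svd(E - mu, full_matrices=False)
        B, scale = Vt[:latent], s[:latent] / np.sqrt(elite)
        # sample in latent space, map back, add small full-space noise
        Z = rng.normal(0, 1, (pop, latent)) * scale
        X = mu + Z @ B + rng.normal(0, 1e-3, (pop, dim))
    return mu, f(mu)

print(ls_eda(lambda x: np.sum(x**2))[1])   # 100-D sphere test function
```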
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Evolution strategies – A comprehensive introduction This article gives a comprehensive introduction to one of the main branches of evolutionary computation – the evolution strategies (ES), the history of which dates back to the 1960s in Germany. Starting from a survey of the history, the philosophical background is explained in order to make understandable why ES are realized the way they are. Basic ES algorithms and design principles for variation and selection operators, as well as theoretical issues, are presented, and future branches of ES research are discussed.
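For readers new to the field, a minimal (mu, lambda)-ES with log-normal step-size self-adaptation captures the basic variation-and-selection loop the article introduces; all parameter settings below are illustrative.

```python
import numpy as np

def comma_es(f, dim=10, mu=5, lam=20, iters=200, seed=0):
    """Minimal (mu, lambda)-ES with log-normal step-size self-adaptation."""
    rng = np.random.default_rng(seed)
    tau = 1 / np.sqrt(dim)                          # learning rate for sigma
    xs = rng.uniform(-5, 5, (mu, dim))
    sig = np.full(mu, 1.0)
    for _ in range(iters):
        parents = rng.integers(0, mu, lam)
        # mutate the strategy parameter first, then the object variables
        child_sig = sig[parents] * np.exp(tau * rng.normal(0, 1, lam))
        children = xs[parents] + child_sig[:, None] * rng.normal(0, 1, (lam, dim))
        fit = np.apply_along_axis(f, 1, children)
        best = np.argsort(fit)[:mu]                 # comma selection: children only
        xs, sig = children[best], child_sig[best]
    return xs[0], f(xs[0])

print(comma_es(lambda x: np.sum(x**2))[1])          # sphere test function
```

Comma selection, in which parents never survive, is what lets the mutated step sizes adapt over the run instead of stagnating with an early lucky parent.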
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions so as to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and more efficient time-series prediction of carbon efficiency. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
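The dispatch logic of the framework (cluster the operating conditions, train one model per condition, route each new sample to the nearest condition's model) can be sketched in a few lines. The cluster centers, the stand-in linear predictors, and all dimensions below are placeholders for the paper's WKFCM clusters and broad learning models:

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 4))   # 3 operating conditions x 4 sintering parameters
# One trivial linear predictor per condition stands in for a per-condition BLM.
models = [lambda x, w=rng.normal(size=4): float(x @ w) for _ in range(3)]

def predict_carbon_efficiency(x):
    """Route the sample to the model of its nearest operating condition."""
    k = int(np.argmin(np.linalg.norm(centers - x, axis=1)))  # nearest-neighbor criterion
    return models[k](x)

print(predict_carbon_efficiency(rng.normal(size=4)))
```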
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading, further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
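The admission step can be pictured as a binary classifier gating the task queue. A toy scikit-learn sketch, using invented features (task size, deadline, queue length) and a synthetic labeling rule rather than the paper's trained model:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((200, 3))                    # [task_size, deadline, queue_length]
y = (X[:, 1] > 0.8 * X[:, 0]).astype(int)   # synthetic label: 1 = meets its deadline

clf = SVC(kernel="rbf").fit(X, y)

def admit(task_features):
    """Add the task to the user's queue only if the SVM predicts on-time completion."""
    return bool(clf.predict(np.asarray(task_features).reshape(1, -1))[0])

print(admit([0.3, 0.9, 0.2]))  # ample deadline slack, so expected to be admitted
```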
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface) assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than the state-of-the-art ML-based schemes.
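The communication saving comes from exchanging model weights instead of pilot/channel data. A minimal federated-averaging sketch, with the CNN abstracted to a linear model and all shapes invented; the paper's training details differ:

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = rng.normal(size=8)                      # stand-in for the channel mapping
users = []
for _ in range(4):                               # local (pilot, channel) datasets
    X = rng.normal(size=(32, 8))
    users.append((X, X @ true_w))

def federated_round(global_w, local_datasets, lr=0.1):
    """Each user takes one local gradient step; the BS averages the weights.
    Raw pilot/channel data never leaves the device."""
    updates = []
    for X, h in local_datasets:
        w = global_w.copy()
        grad = X.T @ (X @ w - h) / len(X)        # local least-squares gradient
        updates.append(w - lr * grad)
    return np.mean(updates, axis=0)

w = np.zeros(8)
for _ in range(200):
    w = federated_round(w, users)
print(np.linalg.norm(w - true_w))                # shrinks toward 0 over the rounds
```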
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by both zero-valued weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
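The 68.9% zero-bit statistic is easy to reproduce in spirit: unpack quantized weights into bit planes and count zeros. The sketch below uses random int8 weights as stand-ins (real model weights would be needed for the actual figure) and shows the essential-bit view that kneading exploits:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.integers(-128, 128, size=10_000).astype(np.int8)  # stand-in 8-bit weights
bits = np.unpackbits(w.view(np.uint8))                    # two's-complement bit planes
print(f"zero-bit fraction: {1 - bits.mean():.3f}")

def essential_bits(value):
    """Positions of set bits in the 8-bit two's-complement encoding; only these
    contribute partial products, which is what SAC accumulates via shifts."""
    v = int(value) & 0xFF
    return [i for i in range(8) if (v >> i) & 1]

print(essential_bits(w[0]))
```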
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and to mitigate both undertrust and overtrust.
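The estimation idea can be condensed to a one-dimensional Kalman filter: trust is a hidden scalar nudged by interaction outcomes and observed through noisy behavioral proxies. All model gains and noise values below are illustrative, not the identified parameters from the study:

```python
A, B, C = 1.0, 0.05, 1.0   # state transition, input gain, observation gain
Q, R = 0.01, 0.25          # process / measurement noise variances

x, P = 0.5, 1.0            # initial trust estimate and its covariance
# (interaction outcome u, sensed behavioral proxy z), e.g. a gaze-derived score
for u, z in [(+1, 0.62), (+1, 0.70), (-1, 0.55)]:
    x, P = A * x + B * u, A * P * A + Q            # predict
    K = P * C / (C * P * C + R)                    # Kalman gain
    x, P = x + K * (z - C * x), (1 - K * C) * P    # update with the new observation
    print(f"trust estimate: {x:.3f}")
```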
Scores: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Comprehensive learning particle swarm optimizer for global optimization of multimodal functions This paper presents a variant of particle swarm optimizers (PSOs), called the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy preserves the diversity of the swarm and discourages premature convergence. Experiments were conducted (using code available from http://www.ntu.edu.sg/home/epnsugan) on multimodal test functions such as Rosenbrock, Griewank, Rastrigin, Ackley, and Schwefel, as well as composition functions, both with and without coordinate rotation. The results demonstrate the good performance of the CLPSO in solving multimodal problems when compared with eight other recent PSO variants.
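The distinguishing velocity update can be sketched compactly: each dimension learns from the personal best of some particle rather than every dimension chasing one global best. The sketch simplifies CLPSO's exemplar selection (which uses a per-particle learning probability and tournament choice) to a uniform random pick, and omits fitness evaluation and pbest updates:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20, 5                       # swarm size, dimensions
pos = rng.uniform(-5, 5, (n, d))
vel = np.zeros((n, d))
pbest = pos.copy()                 # personal bests (updates omitted in this sketch)
w, c = 0.7, 1.5                    # inertia and acceleration coefficients

def exemplar():
    """Comprehensive learning: dimension j follows the pbest of a randomly
    chosen particle, so different dimensions can learn from different particles."""
    idx = rng.integers(0, n, size=d)
    return pbest[idx, np.arange(d)]

for i in range(n):
    vel[i] = w * vel[i] + c * rng.random(d) * (exemplar() - pos[i])
    pos[i] += vel[i]
```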
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions so as to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and more efficient time-series prediction of carbon efficiency. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading, further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
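The two highlighted design choices (speed-deviation reward, 1 s reaction delay) fit into a few lines. The constants and state layout below are illustrative, not the paper's exact formulation:

```python
REACTION_DELAY_STEPS = 10   # 1 s reaction delay at an assumed 0.1 s simulation step

def reward(follower_speeds, leader_speeds, t):
    """Negative squared speed deviation, computed on observations that are one
    reaction delay old, so the agent acts on delayed perception."""
    td = max(0, t - REACTION_DELAY_STEPS)
    dv = follower_speeds[td] - leader_speeds[td]
    return -dv * dv
```

This reward would then be maximized with the deep deterministic policy gradient updates named in the highlights.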
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed through a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for map matching with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM achieves high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
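Structurally, DMM is a sequence-to-sequence model: embed cell tower IDs, encode them with an RNN, and decode road-segment IDs step by step. A skeletal PyTorch sketch with placeholder vocabulary sizes and greedy decoding; the spatial-aware input representation and the RL-based refinement are omitted:

```python
import torch
import torch.nn as nn

N_TOWERS, N_ROADS, H = 1000, 5000, 64   # placeholder vocabulary and hidden sizes

tower_emb = nn.Embedding(N_TOWERS, H)
road_emb = nn.Embedding(N_ROADS, H)
encoder = nn.GRU(H, H, batch_first=True)
decoder = nn.GRU(H, H, batch_first=True)
to_road = nn.Linear(H, N_ROADS)

def map_match(towers, max_len=20):
    """Greedy-decode a road sequence from a (1, T) tensor of cell tower IDs."""
    _, h = encoder(tower_emb(towers))                     # summarize tower sequence
    step = road_emb(torch.zeros(1, 1, dtype=torch.long))  # index 0 acts as <start>
    roads = []
    for _ in range(max_len):
        o, h = decoder(step, h)
        nxt = to_road(o[:, -1]).argmax(-1)                # most likely next road
        roads.append(nxt.item())
        step = road_emb(nxt).unsqueeze(1)
    return roads

print(map_match(torch.randint(0, N_TOWERS, (1, 15)))[:5])  # untrained, so arbitrary
```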
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and to mitigate both undertrust and overtrust.
Scores: 1, 0.001971, 0.001212, 0.000807, 0.000621, 0.000516, 0.000379, 0.000291, 0.000171, 0.000086, 0.000062, 0.000053, 0.000047, 0.000046
Two-Loop Real-Coded Genetic Algorithms with Adaptive Control of Mutation Step Sizes Genetic algorithms are adaptive methods based on natural evolution that may be used for search and optimization problems. They process a population of search space solutions with three operations: selection, crossover, and mutation. In their initial formulation, search space solutions were coded using the binary alphabet; however, other coding types have been considered for the representation issue, such as real coding. The real-coding approach seems particularly natural when tackling optimization problems of parameters with variables in continuous domains. A problem in the use of genetic algorithms is premature convergence, a premature stagnation of the search caused by the lack of population diversity. The mutation operator is the one responsible for generating diversity and may therefore be considered an important element in solving this problem. When working under real coding, one solution involves controlling, throughout the run, the strength with which real genes are mutated, i.e., the step size. This paper presents TRAMSS, a Two-loop Real-coded genetic algorithm with Adaptive control of Mutation Step Sizes. It adjusts the step size of a mutation operator applied during the inner loop to produce efficient local tuning. It also controls the step size of a mutation operator used by a restart operator performed in the outer loop, which reinitializes the population to ensure that different promising search zones are examined by the inner loop throughout the run. Experimental results show that the proposal consistently outperforms other mechanisms presented for controlling mutation step sizes, offering two main advantages simultaneously: better reliability and accuracy.
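Adaptive control of a real-coded mutation step size can be illustrated with a classic success-rate heuristic: widen the step while mutations keep improving the solution, shrink it otherwise. This 1/5-success-rule-style loop is a stand-in for TRAMSS's inner-loop adjustment, not the algorithm itself:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

x = [random.uniform(-5, 5) for _ in range(10)]
step, successes, trials = 1.0, 0, 0
for _ in range(2000):
    y = [v + random.gauss(0, step) for v in x]   # real-coded Gaussian mutation
    trials += 1
    if sphere(y) < sphere(x):
        x, successes = y, successes + 1
    if trials == 20:                             # adapt every 20 mutations
        step *= 1.5 if successes / trials > 0.2 else 0.6
        successes = trials = 0
print(sphere(x), step)   # fitness shrinks as the step size self-adjusts
```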
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process; moreover, a complete retraining is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined from the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions so as to better reflect the system dynamics of the process. Then, a BLM is built for each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results on actual run data show that, compared with other prediction models, the developed model achieves more accurate and more efficient time-series prediction of carbon efficiency. Furthermore, owing to its flexible structure, the developed model can also be applied to the efficient and effective modeling of other industrial processes.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading, further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which combines SVM-based task admission with Lyapunov-based computation offloading: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed through a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for map matching with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM achieves high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through techniques for measuring drivers' trust in the automated driving system in real time. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and to mitigate both undertrust and overtrust.
Scores: 1, 0.001886, 0.001159, 0.000772, 0.000594, 0.000493, 0.000363, 0.000278, 0.000164, 0.000082, 0.00006, 0.00005, 0.000045, 0.000044
Memetic Algorithms for Continuous Optimisation Based on Local Search Chains Memetic algorithms with continuous local search methods have arisen as effective tools to address the difficulty of obtaining reliable solutions of high precision for complex continuous optimisation problems. There exists a group of continuous local search algorithms that stand out as exceptional local search optimisers. However, on some occasions, they may become very expensive because of the way they exploit local information to guide the search process; in this paper, they are called intensive continuous local search methods. Given the potential of this type of local optimisation method, it is interesting to build prospective memetic algorithm models with them. This paper presents the concept of the local search chain as a springboard to design memetic algorithm approaches that can effectively use intensive continuous local search methods as local search operators. A local search chain captures the idea that, at one stage, the local search operator may continue the operation of a previous invocation, starting from the final configuration (initial solution, strategy parameter values, internal variables, etc.) reached by that one. The proposed memetic algorithm favours the formation of local search chains during the run, with the aim of concentrating local tuning in search regions showing promise. In order to study the performance of the new memetic algorithm model, an instance is implemented with CMA-ES as the intensive local search method. The benefits of the proposal in comparison with other kinds of memetic algorithms and evolutionary algorithms proposed in the literature for continuous optimisation are shown experimentally. Concretely, the empirical study reveals a clear superiority when tackling high-dimensional problems.
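The chaining mechanism amounts to persisting the local search's final configuration so the next invocation resumes rather than restarts. A minimal sketch with a greedy Gaussian perturbation search standing in for CMA-ES; the stored state here is just the solution and a step size:

```python
import random
from dataclasses import dataclass

@dataclass
class LSState:
    x: list            # current solution
    step: float = 1.0  # strategy parameter carried across invocations

def local_search(state, budget, f):
    """Spend `budget` evaluations, mutating the chain state in place."""
    for _ in range(budget):
        y = [v + random.gauss(0, state.step) for v in state.x]
        if f(y) < f(state.x):
            state.x = y
        else:
            state.step *= 0.9   # the adapted step size survives into the next link
    return state

f = lambda x: sum(v * v for v in x)
chain = LSState(x=[3.0, -2.0])
for _ in range(3):              # three invocations forming one local search chain
    chain = local_search(chain, budget=50, f=f)
    print(f(chain.x), chain.step)
```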
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related work is limited to specific scenarios and has limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatch in domain distribution on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
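As a rough illustration of the style-content recombination idea behind SCD-DR, the sketch below recombines "content" from one domain with the "style" (feature statistics) of another, in the manner of AdaIN. This is a named stand-in for the module, not the paper's actual implementation; all shapes and values are assumptions.

```python
import numpy as np

# AdaIN-style recombination: "style" is taken to be the mean/std of a feature
# map (global statistics here for brevity; the usual AdaIN is per-channel),
# and "content" is the normalized feature. Illustrative analogy only.
def recombine(content_feat, style_feat, eps=1e-5):
    mu_c, sd_c = content_feat.mean(), content_feat.std() + eps
    mu_s, sd_s = style_feat.mean(), style_feat.std() + eps
    return sd_s * (content_feat - mu_c) / sd_c + mu_s

synthetic = np.random.randn(64, 8, 8) * 0.5 + 1.0   # synthetic-domain feature map
real = np.random.randn(64, 8, 8) * 2.0 - 0.5        # real-domain feature map
hybrid = recombine(synthetic, real)                 # synthetic content, real style
print(hybrid.mean(), hybrid.std())                  # approx. real-domain statistics
```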
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
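The conjunctive-trigger/disjunctive-observation insight lends itself to a compact check. Below is a minimal sketch under an assumed record layout; the field names are hypothetical, not the actual RemembERR schema.

```python
# Hypothetical erratum record: a bug fires only when ALL of its triggers are
# applied (conjunctive) and is detected when ANY of its observations is seen
# (disjunctive), per the key observation described in the abstract.
def bug_triggered(applied_triggers, erratum):
    return erratum["triggers"].issubset(applied_triggers)

def bug_detected(seen_observations, erratum):
    return bool(erratum["observations"] & seen_observations)

erratum = {
    "triggers": {"power_level_transition", "custom_feature_enabled"},
    "observations": {"machine_check", "wrong_result"},
}
applied = {"power_level_transition", "custom_feature_enabled"}
print(bug_triggered(applied, erratum))            # True: all triggers applied
print(bug_detected({"wrong_result"}, erratum))    # True: one deviation suffices
```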
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process, and a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
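For readers unfamiliar with the clustering step, here is a minimal sketch of the fuzzy C-means membership update that WKFCM builds on; the paper's weighted kernel distance is simplified to plain Euclidean distance, an assumption made for brevity.

```python
import numpy as np

# Standard FCM memberships: u[i, k] = 1 / sum_j (d[i, k] / d[i, j])^(2/(m-1)),
# where d[i, k] is the distance from sample i to cluster center k.
def fcm_memberships(X, centers, m=2.0):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

X = np.random.rand(100, 3)           # 100 samples of 3 sintering parameters (assumed)
centers = np.random.rand(4, 3)       # 4 candidate operating conditions (assumed)
U = fcm_memberships(X, centers)
assert np.allclose(U.sum(axis=1), 1.0)   # memberships sum to 1 per sample
```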
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
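A hedged sketch of the two building blocks named in the abstract, an SVM admission test and a Lyapunov-style virtual-queue update; the features, labels, and rates below are synthetic assumptions, not the paper's model.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic training set: [task size, deadline, channel quality] -> completable?
X = rng.random((200, 3))
y = (X[:, 1] > 0.8 * X[:, 0]).astype(int)        # assumed completability rule
admit = SVC(kernel="rbf").fit(X, y)              # SVM-based admission test

Q = 0.0                                          # Lyapunov virtual queue backlog
SERVICE = 0.5                                    # assumed per-slot service rate
for _ in range(50):
    task = rng.random(3)
    if admit.predict(task.reshape(1, -1))[0]:    # admit only promising tasks
        Q = max(Q + task[0] - SERVICE, 0.0)      # queue update: arrival - service
print(f"final backlog: {Q:.3f}")
```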
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated a good capability of generalization.
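A minimal sketch of the reward signal those highlights describe, assuming the reward is the negative absolute deviation between simulated and observed speed and that the 1 s reaction delay is realized by applying each action a fixed number of steps late; both constants are illustrative, not the paper's exact setup.

```python
from collections import deque

DT = 0.1                          # simulation step in seconds (assumed)
DELAY_STEPS = int(1.0 / DT)       # 1 s reaction delay -> 10 steps

def reward(v_sim: float, v_obs: float) -> float:
    """Negative absolute speed deviation, as in the highlights."""
    return -abs(v_sim - v_obs)

pending = deque([0.0] * DELAY_STEPS)   # actions waiting to take effect
v_sim = 10.0
for v_obs, accel in [(10.2, 0.3), (10.4, 0.2), (10.3, -0.1)]:
    pending.append(accel)
    v_sim += pending.popleft() * DT    # the action chosen 1 s ago acts now
    print(reward(v_sim, v_obs))
```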
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes, such as changes in task features, may occur frequently. These changes cause deviations from the primary schedule. In this article, a heuristic model, inspired by the Artificial Bee Colony algorithm, is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from their estimated values. The objective is near-optimal rescheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed once the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times; the statistical results illustrate the efficiency of the proposed algorithm.
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. Service quality matters because system failures can seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique that predicts the failure-proneness of a node in a cloud service system based on historical data, before a node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, thereby improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure can be caused by a variety of reasons and is reflected by many temporal and spatial signals; furthermore, the failure data are highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate the spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
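A rough sketch of MING's combination step, in which intermediate scores from a temporal model and a spatial model feed a ranking model. The LSTM is mocked by a placeholder score and the data are synthetic, so this is an assumption-laden illustration rather than the actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
spatial = rng.random((n, 5))                     # spatial signals per node
temporal_score = rng.random(n)                   # stand-in for the LSTM output
y = (rng.random(n) < 0.05).astype(int)           # highly imbalanced failure labels

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(spatial, y)
rf_score = rf.predict_proba(spatial)[:, 1]       # spatial failure score

# Ranking model over the two intermediate scores; class_weight="balanced"
# stands in for the cost-sensitive handling of the imbalance.
Z = np.column_stack([temporal_score, rf_score])
ranker = LogisticRegression(class_weight="balanced").fit(Z, y)
ranking = np.argsort(-ranker.predict_proba(Z)[:, 1])   # most failure-prone first
print("top-5 suspect nodes:", ranking[:5])
```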
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
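A scalar Kalman filter of the kind such a framework could apply, with the hidden state standing for the trust level and the measurement standing for a behavior-derived score; all noise variances and measurements below are assumptions, not the paper's identified parameters.

```python
# Scalar Kalman filter: predict (trust assumed locally constant between
# interactions), then update with the behavioral measurement z.
def kalman_step(x, P, z, q=0.01, r=0.1):
    P = P + q                  # predict: process noise inflates uncertainty
    K = P / (P + r)            # Kalman gain from measurement noise r
    x = x + K * (z - x)        # update trust estimate toward the measurement
    P = (1 - K) * P            # shrink the posterior variance
    return x, P

x, P = 0.5, 1.0                        # initial trust estimate and variance
for z in [0.6, 0.7, 0.65, 0.8]:        # synthetic behavior-derived measurements
    x, P = kalman_step(x, P, z)
print(f"estimated trust: {x:.3f}")
```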
1
0.002486
0.001528
0.001017
0.000783
0.00065
0.000478
0.000367
0.000216
0.000109
0.000079
0.000066
0.000059
0.000057
Tracking control for multi-agent consensus with an active leader and variable topology In this paper, we consider a multi-agent consensus problem with an active leader and variable interconnection topology. The state of the considered leader not only keeps changing but also may not be measured. To track such a leader, a neighbor-based local controller together with a neighbor-based state-estimation rule is given for each autonomous agent. Then we prove that, with the proposed control scheme, each agent can follow the leader if the (acceleration) input of the active leader is known, and the tracking error is estimated if the input of the leader is unknown.
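A minimal simulation of such a neighbor-based tracking rule, sketched under an assumed three-agent topology with one agent pinned to the leader and the leader's input known (the feedforward case for which the abstract states exact following); gains and topology are illustrative.

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)      # fixed interaction topology (assumed)
b = np.array([1.0, 0.0, 0.0])         # only agent 0 observes the leader
x = np.array([2.0, -1.0, 0.5])        # follower states
x0, v0, dt = 0.0, 0.3, 0.01           # leader state and constant input

for _ in range(2000):
    # consensus term sum_j a_ij (x_i - x_j) plus leader-tracking term
    u = -(A.sum(1) * x - A @ x) - b * (x - x0)
    x += dt * (u + v0)                # followers feed forward the known input
    x0 += dt * v0                     # leader keeps moving
print(np.abs(x - x0).max())           # tracking error shrinks toward 0
```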
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatch in domain distribution on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process, and a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as the reward function and considers a reaction delay of 1 s.•The deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated a good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the bits in the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
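The zero-bit statistic is easy to probe empirically. The sketch below counts zero bits in the magnitudes of 8-bit quantized Gaussian weights; the sign-magnitude view and the quantization scheme are illustrative assumptions, so the printed fraction will not match the paper's 68.9% exactly.

```python
import numpy as np

# Quantize |w| to 8-bit magnitudes (sign handled separately, by assumption),
# then count the fraction of zero bits across all quantized values.
w = np.random.randn(100_000).astype(np.float32)
q = np.clip(np.round(np.abs(w) / np.abs(w).max() * 127), 0, 127).astype(np.uint8)
bits = np.unpackbits(q)                  # 8 bits per quantized magnitude
print("zero-bit fraction:", 1.0 - bits.mean())
```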
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
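A skeleton of the described architecture (five convolutional layers, some followed by max-pooling, three fully connected layers, ReLU non-saturating units, and dropout in the fully connected layers), sketched in PyTorch with the commonly cited channel sizes; the final 1000-way softmax is applied by the cross-entropy loss during training.

```python
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),            # logits for the 1000 ImageNet classes
)
print(alexnet_like(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])
```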
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatch in domain distribution on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process, and a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: the number of completed tasks increases significantly, and substantial energy savings and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than the state-of-the-art ML-based schemes.
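The core of such an FL scheme is that only model weights, never the raw pilot/channel data, travel to the BS. A minimal FedAvg-style aggregation, with illustrative shapes and dataset sizes that are assumptions rather than the paper's setup, might look like:

```python
import numpy as np

# Weighted average of local model weights, with weights proportional to each
# user's local dataset size (the standard FedAvg rule).
def fed_avg(local_weights, n_samples):
    w = np.asarray(n_samples, dtype=float)
    w /= w.sum()
    return sum(wi * li for wi, li in zip(w, local_weights))

users = [np.random.randn(4, 4) for _ in range(8)]    # local CNN-layer weights (assumed shape)
sizes = [100, 120, 80, 100, 90, 110, 95, 105]        # local dataset sizes (assumed)
global_w = fed_avg(users, sizes)
print(global_w.shape)   # the BS broadcasts this aggregate back to the users
```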
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks. Multivariate time series forecasting is an important machine learning problem across many domains, including predictions of solar plant energy output, electricity consumption, and traffic jam situations. Temporal data arising in these real-world applications often involve a mixture of long-term and short-term patterns, for which traditional approaches such as autoregressive models and Gaussian processes may fail. In this paper, we propose a novel deep learning framework, namely the Long- and Short-term Time-series Network (LSTNet), to address this open challenge. LSTNet uses a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) to extract short-term local dependency patterns among variables and to discover long-term patterns in time series trends. Furthermore, we leverage a traditional autoregressive model to tackle the scale-insensitivity problem of the neural network model. In our evaluation on real-world data with complex mixtures of repetitive patterns, LSTNet achieved significant performance improvements over several state-of-the-art baseline methods. All the data and experiment codes are available online.
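A sketch of LSTNet's scale-sensitivity fix as just described: the final forecast adds a linear autoregressive term over recent values to the neural prediction. The neural component is mocked by a constant here, an assumption made for brevity.

```python
import numpy as np

# Final forecast = neural prediction + linear AR term over the last p values,
# so the output stays sensitive to the scale of the input series.
def forecast(history, neural_part, ar_weights):
    p = len(ar_weights)
    ar_part = np.dot(ar_weights, history[-p:])   # linear AR over recent values
    return neural_part + ar_part

history = np.sin(np.linspace(0, 10, 200))        # synthetic series (assumed)
print(forecast(history, neural_part=0.02, ar_weights=np.array([0.1, 0.3, 0.5])))
```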
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001873
0.001151
0.000767
0.00059
0.00049
0.00036
0.000277
0.000163
0.000082
0.000059
0.00005
0.000044
0.000043
Microsoft COCO: Common Objects in Context We present a new dataset with the goal of advancing the state of the art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting, and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
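The style–content decoupling above can be pictured as a feature-statistics recombination in the spirit of AdaIN. The sketch below is only a conceptual stand-in: the paper's SCD-DR module is a learned component, whereas this fixed operation simply swaps channel-wise statistics between domains.

```python
# Conceptual sketch of style-content recombination (AdaIN-style statistics
# swap); the actual SCD-DR module is learned, unlike this fixed operation.
import torch

def recombine_style_content(content_feat, style_feat, eps=1e-5):
    """content_feat, style_feat: (N, C, H, W) feature maps from some encoder."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize away the synthetic style, then re-inject the real style.
    return s_std * (content_feat - c_mean) / c_std + s_mean

synthetic = torch.randn(4, 64, 32, 32)  # features of synthetic (content) images
real = torch.randn(4, 64, 32, 32)       # features of real (style) images
hybrid = recombine_style_content(synthetic, real)
print(hybrid.shape)  # torch.Size([4, 64, 32, 32]): hybrid-domain features
```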
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
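The conjunctive/disjunctive structure of errata described above reduces to a simple predicate: all triggers are necessary, any one observation is sufficient. The sketch below is a hypothetical illustration; the field names are assumptions, not the actual RemembERR schema.

```python
# Hypothetical erratum structure; field names are assumptions, not the
# actual RemembERR schema.
from dataclasses import dataclass

@dataclass
class Erratum:
    triggers: set       # conjunctive: ALL triggers must be applied
    observations: set   # disjunctive: ANY single deviation suffices

def bug_detected(erratum: Erratum, applied: set, observed: set) -> bool:
    all_triggers_applied = erratum.triggers <= applied            # necessary condition
    any_observation_seen = bool(erratum.observations & observed)  # sufficient signal
    return all_triggers_applied and any_observation_seen

e = Erratum({"power_level_transition", "custom_feature_enabled"},
            {"wrong_msr_value", "hang"})
# Applying both triggers and seeing one deviation is enough to detect the bug.
print(bug_detected(e, {"power_level_transition", "custom_feature_enabled"}, {"hang"}))  # True
# Missing a trigger means the bug cannot be attributed, even if something is observed.
print(bug_detected(e, {"power_level_transition"}, {"hang"}))  # False
```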
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
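As a rough sketch of the clustering step, the membership update below follows the standard kernel fuzzy C-means formulation with per-feature weights; the paper's exact WKFCM variant may differ in its weighting and update rules.

```python
# Hedged sketch of a weighted kernel fuzzy C-means membership update,
# following the standard KFCM formulation with per-feature weights w.
import numpy as np

def wkfcm_memberships(X, V, w, m=2.0, gamma=1.0):
    """X: (n, d) samples, V: (c, d) cluster centers, w: (d,) feature weights."""
    # Weighted RBF kernel: K(x, v) = exp(-gamma * sum_j w_j (x_j - v_j)^2)
    diff = X[:, None, :] - V[None, :, :]                  # (n, c, d)
    K = np.exp(-gamma * np.einsum("ncd,d->nc", diff**2, w))
    D = 2.0 * (1.0 - K) + 1e-12                           # kernel-induced distance
    # Standard fuzzy membership update: u_ik proportional to D_ik^(-1/(m-1)).
    U = D ** (-1.0 / (m - 1.0))
    return U / U.sum(axis=1, keepdims=True)               # rows sum to 1

X = np.random.rand(100, 5)
V = X[np.random.choice(100, 3, replace=False)]            # 3 initial centers
U = wkfcm_memberships(X, V, w=np.ones(5) / 5)
print(U.shape, U.sum(axis=1)[:3])                         # (100, 3), rows sum to 1
```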
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
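The admission step described above can be approximated with an off-the-shelf classifier. In the sketch below, the features, training labels, and threshold are all illustrative assumptions; only the admit-or-drop logic mirrors the idea in the abstract.

```python
# Hedged sketch of SVM-based task admission: predict whether a task will
# finish before its deadline, then admit or drop it. Features and the
# synthetic labeling rule are invented for the demo.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Features per task: [input size (MB), CPU load (Gcycles), deadline (ms), queue length]
X_train = rng.random((500, 4)) * [10, 5, 100, 20]
# Synthetic label: 1 = completed within deadline (a made-up rule for the demo)
y_train = (X_train[:, 2] > 40) & (X_train[:, 3] < 10)

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def admit(task_features, threshold=0.5):
    """Admit the task to the queue only if its completion probability is high enough."""
    p_complete = clf.predict_proba([task_features])[0, 1]
    return p_complete >= threshold

print(admit([2.0, 1.0, 80.0, 3.0]))   # generous deadline, short queue: likely admitted
print(admit([9.0, 4.5, 10.0, 18.0]))  # tight deadline, long queue: likely dropped
```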
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
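A minimal way to picture the coalition-game mechanism is a best-response loop in which each digital twin switches to the group that lowers its own cost until no twin benefits from moving, i.e., a Nash-stable partition. The cost model and values below are invented for the demo and are not the paper's AG-CDM/CG-DDM formulation.

```python
# Toy best-response dynamics for group formation; the cost model (shared
# computation cost plus per-group transmission cost) is an illustrative
# assumption, not the paper's formulation.
import random

random.seed(0)
N_AV, N_GROUPS = 10, 3
comp = [random.uniform(1, 5) for _ in range(N_AV)]                              # per-AV computation cost
tx = [[random.uniform(0.1, 2) for _ in range(N_GROUPS)] for _ in range(N_AV)]  # per-group transmission cost

assignment = [random.randrange(N_GROUPS) for _ in range(N_AV)]

def cost(av, g):
    # Computation cost is shared within the group the AV would belong to;
    # transmission cost depends on which group it joins.
    size = sum(1 for a in range(N_AV) if assignment[a] == g and a != av) + 1
    return comp[av] / size + tx[av][g]

for _ in range(100):                  # iterate unilateral switches until stable
    changed = False
    for av in range(N_AV):
        best = min(range(N_GROUPS), key=lambda g: cost(av, g))
        if cost(av, best) < cost(av, assignment[av]) - 1e-9:
            assignment[av] = best
            changed = True
    if not changed:                   # no twin benefits from moving: Nash stable
        break

print("stable group assignment:", assignment)
```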
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient (DDPG) algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
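A minimal sketch of the reward idea in these highlights: penalize the deviation between the commanded speed and the observed human speed, applying the commanded action with a 1 s reaction delay. The step size and reward shaping are assumptions, not the paper's exact design.

```python
# Hedged sketch of a speed-deviation reward with a 1 s reaction delay.
# The 0.1 s simulation step and the reward shaping are assumptions.
from collections import deque

REACTION_DELAY_STEPS = 10  # 1 s at an assumed 0.1 s simulation step

class DelayedSpeedDeviationReward:
    def __init__(self):
        self._action_buffer = deque(maxlen=REACTION_DELAY_STEPS)

    def step(self, commanded_speed: float, observed_human_speed: float) -> float:
        self._action_buffer.append(commanded_speed)
        if len(self._action_buffer) < REACTION_DELAY_STEPS:
            return 0.0  # not enough history yet to apply the delayed action
        delayed_speed = self._action_buffer[0]  # action issued 1 s ago takes effect now
        return -abs(delayed_speed - observed_human_speed)  # negative speed deviation

r = DelayedSpeedDeviationReward()
for t in range(12):
    print(round(r.step(commanded_speed=10.0 + 0.1 * t, observed_human_speed=10.5), 3))
```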
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
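The straggler effect described above can be reproduced in a few lines: with over-the-air aggregation, every device pre-scales its update so contributions align at the server, and the common scaling is capped by the weakest channel gain, so a single poor channel inflates the aggregation error. The sketch below uses illustrative channel and power values and omits the relays themselves.

```python
# Hedged sketch of over-the-air model aggregation with a weakest-channel
# power constraint; channel gains and noise level are illustrative.
import numpy as np

rng = np.random.default_rng(1)
K, d = 8, 16                             # devices, model dimension
h = np.abs(rng.normal(size=K)) + 0.05    # channel gains (kept away from zero for the demo)
P = 1.0                                  # per-device power budget (unit-power updates assumed)
updates = rng.normal(size=(K, d))        # local model updates

eta = np.sqrt(P) * h.min()               # common scaling capped by the weakest channel
tx = (eta / h)[:, None] * updates        # pre-scaling so contributions align at the server
rx = (h[:, None] * tx).sum(axis=0) + rng.normal(scale=0.1, size=d)  # superposition + noise
aggregate = rx / (eta * K)               # server estimate of the mean update

# MSE scales like noise_var / (eta*K)^2: a smaller weakest gain means a larger error.
print("weakest gain:", round(h.min(), 3))
print("aggregation MSE:", round(float(np.mean((aggregate - updates.mean(axis=0)) ** 2)), 5))
```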
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for a map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
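A skeletal version of the encoder-decoder idea in DMM might look like the sketch below: a GRU encoder summarizes the cell-tower sequence and a GRU decoder greedily emits road-segment IDs. Dimensions, vocabularies, and token conventions are assumptions; DMM's spatial-aware representations and RL refinement stage are omitted.

```python
# Hedged seq2seq sketch: cell-tower IDs in, road-segment IDs out.
# Sizes and the <start>-token convention are assumptions for the demo.
import torch
import torch.nn as nn

N_TOWERS, N_ROADS, EMB, HID = 1000, 5000, 64, 128

class CellToRoadSeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.tower_emb = nn.Embedding(N_TOWERS, EMB)
        self.road_emb = nn.Embedding(N_ROADS, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, N_ROADS)

    def forward(self, towers, max_len=20):
        _, h = self.encoder(self.tower_emb(towers))   # summarize the tower sequence
        road = torch.zeros(towers.size(0), 1, dtype=torch.long)  # assumed <start> id 0
        outputs = []
        for _ in range(max_len):                      # greedy decoding of road IDs
            dec_out, h = self.decoder(self.road_emb(road), h)
            logits = self.out(dec_out[:, -1])
            road = logits.argmax(dim=-1, keepdim=True)
            outputs.append(road)
        return torch.cat(outputs, dim=1)

model = CellToRoadSeq2Seq()
print(model(torch.randint(0, N_TOWERS, (2, 15))).shape)  # (2, 20) road-segment IDs
```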
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and the driver's performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
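As a rough illustration of Kalman-filter-based trust estimation, the sketch below tracks a scalar trust state driven by interaction outcomes and corrected by a noisy behavioral measurement. All dynamics and noise values are invented for the demo, not the paper's identified model parameters.

```python
# Hedged scalar Kalman filter for trust estimation; the dynamics (a, b, c)
# and noise variances (Q, R) are illustrative assumptions.
a, b = 0.95, 0.5       # trust dynamics: x_t = a*x_{t-1} + b*u_t + w,  w ~ N(0, Q)
c = 1.0                # observation:    z_t = c*x_t + v,              v ~ N(0, R)
Q, R = 0.01, 0.25

def kalman_step(x, P, u, z):
    # Predict trust from the previous estimate and the interaction outcome u.
    x_pred, P_pred = a * x + b * u, a * P * a + Q
    # Correct with the sensed behavior z (e.g., a feature derived from eye tracking).
    K = P_pred * c / (c * P_pred * c + R)
    return x_pred + K * (z - c * x_pred), (1 - K * c) * P_pred

x, P = 0.5, 1.0
for u, z in [(1, 0.7), (1, 0.9), (-1, 0.4)]:   # outcomes: success = +1, failure = -1
    x, P = kalman_step(x, P, 0.1 * u, z)
    print(f"trust estimate: {x:.3f} (var {P:.3f})")
```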
Scores (score_0–score_13): 1, 0.001984, 0.001219, 0.000812, 0.000625, 0.000519, 0.000382, 0.000293, 0.000172, 0.000087, 0.000063, 0.000053, 0.000047, 0.000046
ImageNet: A Large-Scale Hierarchical Image Database The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize, and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called "ImageNet", a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500–1000 clean, full-resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification, and automatic object clustering. We hope that the scale, accuracy, diversity, and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient (DDPG) algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Predicting Node Failure in Cloud Service Systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. The service quality matters because system failures could seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique, which can predict the failure-proneness of a node in a cloud service system based on historical data, before node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to healthy nodes, therefore improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure can be caused by a variety of reasons and reflected by many temporal and spatial signals; furthermore, the failure data are highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) an LSTM model to incorporate the temporal data; 2) a Random Forest model to incorporate spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness; and 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
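The MING-style combination described above can be sketched as follows: a temporal score (standing in for the LSTM), a spatial score from a Random Forest, a ranking model over the two intermediate scores, and a cost-sensitive threshold sweep. Data, models, and cost values here are illustrative, not the paper's.

```python
# Hedged sketch: combine a temporal score and a spatial score into a ranking,
# then pick a cost-sensitive threshold. All data and costs are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
temporal_score = rng.random(n)          # stand-in for an LSTM's failure score
spatial = rng.random((n, 6))            # spatial features (e.g., rack/cluster stats)
y = rng.random(n) < 0.05 * (1 + 9 * temporal_score)  # imbalanced failure labels

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(spatial, y)
spatial_score = rf.predict_proba(spatial)[:, 1]

# Ranking model over the two intermediate scores (logistic regression as a stand-in).
Z = np.column_stack([temporal_score, spatial_score])
risk = LogisticRegression().fit(Z, y).predict_proba(Z)[:, 1]

# Cost-sensitive threshold: flagging a healthy node costs 1, missing a failure costs 20.
thresholds = np.linspace(0.01, 0.99, 99)
cost = [((risk >= t) & ~y).sum() + 20 * ((risk < t) & y).sum() for t in thresholds]
t_star = thresholds[int(np.argmin(cost))]
print(f"optimal threshold {t_star:.2f}, flagged nodes: {(risk >= t_star).sum()}")
```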
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and the driver's performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001848, 0.001136, 0.000756, 0.000582, 0.000484, 0.000356, 0.000273, 0.00016, 0.000081, 0.000059, 0.000049, 0.000044, 0.000043
CNN Features Off-the-Shelf: An Astounding Baseline for Recognition Recent results indicate that the generic descriptors extracted from convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle a diverse range of recognition tasks: object image classification, scene recognition, fine-grained recognition, attribute detection, and image retrieval, applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistently superior results compared to highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval, it consistently outperforms low-memory-footprint methods, except on the sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in the case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques, e.g., jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual-and-real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies that aim to minimize energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to simultaneously meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. A car-following model was proposed based on deep reinforcement learning. It uses speed deviation as the reward function and considers a reaction delay of 1 s. The deep deterministic policy gradient (DDPG) algorithm was used to optimize the model. The model outperformed traditional and recent data-driven car-following models and demonstrated a good capability of generalization.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for a map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and the driver's performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behavior to control drivers' trust levels and mitigate both undertrust and overtrust.
Scores (score_0–score_13): 1, 0.001874, 0.001152, 0.000767, 0.00059, 0.00049, 0.00036, 0.000277, 0.000163, 0.000082, 0.000059, 0.00005, 0.000044, 0.000043
Handover schemes in satellite networks: state-of-the-art and future research directions
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatched domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
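The data-level recombination idea (decouple style from content, then mix across domains) can be caricatured with an AdaIN-style statistic swap over feature maps. This is a hedged stand-in for the SCD-DR module, not its actual design; shapes and names are assumptions.

```python
import numpy as np

def recombine(content_feat, style_feat, eps=1e-5):
    """Swap per-channel feature statistics: keep the content image's
    structure but re-render it with the style image's statistics.

    content_feat, style_feat : arrays of shape (C, H, W)
    """
    c_mu = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Normalize away the source style, then inject the target style.
    return (content_feat - c_mu) / c_std * s_std + s_mu

# Hybrid-domain sample: synthetic content rendered with real-domain style.
synthetic = np.random.rand(3, 64, 64)  # stand-in for synthetic features
real = np.random.rand(3, 64, 64)       # stand-in for real features
hybrid = recombine(synthetic, real)
```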
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
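The paper's key observation, that triggers are conjunctive while observations are disjunctive, is essentially a two-clause predicate. A minimal sketch, with hypothetical trigger and observation sets rather than RemembERR's real schema:

```python
def bug_detected(required_triggers, possible_observations, applied, observed):
    """Triggers are conjunctive: all must be applied.
    Observations are disjunctive: any single deviation suffices."""
    return required_triggers <= applied and bool(possible_observations & observed)

# Hypothetical erratum: needs both triggers; any one observation detects it.
print(bug_detected(
    required_triggers={"power_transition", "msr_config"},
    possible_observations={"hang", "wrong_result"},
    applied={"power_transition", "msr_config", "custom_feature"},
    observed={"wrong_result"},
))  # True
```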
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions and better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
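The final dispatch step, invoking the per-operating-condition model whose cluster center is nearest to the incoming sample, is easy to isolate. A sketch with stand-in centers and models (the WKFCM clustering and BLM internals are omitted):

```python
import numpy as np

def predict_carbon_efficiency(x, centers, models):
    """Route a sample to the model of the nearest operating-condition
    cluster center (the nearest neighbor criterion)."""
    idx = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
    return models[idx](x)

centers = np.array([[0.2, 0.5], [0.8, 0.3]])  # stand-in cluster centers
models = [lambda x: 0.9, lambda x: 0.7]       # stand-in per-condition models
print(predict_carbon_efficiency(np.array([0.75, 0.35]), centers, models))
```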
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared to alternative baseline approaches.
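The SVM-based admission decision can be sketched with scikit-learn as below; the features (task size, deadline, queue length) and the training labels are hypothetical, and the Lyapunov offloading machinery is not shown.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical history: [task size (MB), deadline (s), queue length];
# label 1 means the task completed within its deadline.
X = np.array([[1, 2.0, 3], [5, 1.0, 8], [2, 3.0, 1], [6, 0.5, 9]])
y = np.array([1, 0, 1, 0])
admit = SVC(kernel="rbf").fit(X, y)

def handle_task(task_features):
    """Add the task to the user's queue only if the SVM predicts it can
    finish before its deadline; otherwise drop it."""
    return "admit" if admit.predict([task_features])[0] == 1 else "drop"

print(handle_task([2, 2.5, 2]))
```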
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
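The two quantitative highlights, a speed-deviation reward and a 1 s reaction delay, translate into a few lines. This sketch assumes a fixed simulation step; it is illustrative, not the paper's exact formulation.

```python
from collections import deque

DT = 0.1                     # assumed simulation step (s)
DELAY_STEPS = int(1.0 / DT)  # 1 s human-like reaction delay

def reward(ego_speed, lead_speed):
    """Speed-deviation reward: penalize the gap to the leader's speed."""
    return -abs(ego_speed - lead_speed)

pending = deque([0.0] * DELAY_STEPS)  # actions waiting to take effect

def apply_action(policy_action):
    """Return the acceleration selected DELAY_STEPS ago, so the DDPG
    policy's choice only takes effect after the reaction delay."""
    pending.append(policy_action)
    return pending.popleft()
```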
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
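Monitoring frequency, the gaze measure used above, can be computed from per-frame area-of-interest labels. The labels and sampling rate below are assumptions for illustration:

```python
def monitoring_frequency(gaze_labels, hz=60):
    """Glances toward the road scene per minute, where a glance is a
    transition from any other AOI to the 'road' AOI.

    gaze_labels : per-frame AOI labels, e.g. ['ndrt', 'road', ...]
    """
    glances = sum(1 for prev, cur in zip(gaze_labels, gaze_labels[1:])
                  if prev != "road" and cur == "road")
    minutes = len(gaze_labels) / hz / 60.0
    return glances / minutes if minutes else 0.0

# Per the study, higher reported trust should yield a lower value here.
print(monitoring_frequency(["ndrt"] * 120 + ["road"] * 60 + ["ndrt"] * 120))
```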
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during the multiply-and-accumulate (MAC) operation in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate ineffectual computation caused by either zero-value weights or zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
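The zero-bit statistic and the essential-bit idea behind weight kneading are easy to reproduce on quantized weights. A sketch assuming 8-bit weights (the paper's kneading and SAC datapath are hardware-level and not shown):

```python
import numpy as np

def zero_bit_fraction(weights_int8):
    """Fraction of zero bits across the 8-bit encodings of the weights."""
    bits = np.unpackbits(weights_int8.view(np.uint8))
    return 1.0 - bits.mean()

def essential_bits(w):
    """Positions of set bits in |w| -- the only terms a split-and-
    accumulate (SAC) unit must process; zero bits contribute nothing."""
    return [i for i in range(8) if (abs(int(w)) >> i) & 1]

w = np.array([0, 3, -2, 64, 0, 5], dtype=np.int8)
print(zero_bit_fraction(w), essential_bits(np.int8(5)))  # 5 -> bits [0, 2]
```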
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Evolutionary computation: comments on the history and current state Evolutionary computation has started to receive significant attention during the last decade, although its origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP), by analyzing and comparing their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanisms). Finally, we give a brief overview of the many application domains, although this overview must necessarily remain incomplete.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT places greater emphasis on internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatched domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions and better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Non-Strict Cache Coherence: Exploiting Data-Race Tolerance in Emerging Applications Software distributed shared memory (DSM) platforms on networks of workstations tolerate large network latencies by employing one of several weak memory consistency models. Data-race tolerant applications, such as Genetic Algorithms (GAs), Probabilistic Inference, etc., offer an additional degree of freedom to tolerate network latency: they do not synchronize shared memory references, and behave correctly when supplied outdated shared data. However, these algorithms often have a high communication-to-computation ratio and can flood the network with messages in the presence of large message delays. We study the performance of controlled asynchronous implementations of these algorithms via the use of our previously proposed blocking Global Read memory access primitive. Global Read implements non-strict cache coherence by guaranteeing to return to the reader a shared datum value from within a specified staleness range. Experiments on an IBM SP2 multicomputer with an Ethernet show significant performance improvements for controlled asynchronous implementations. On a lightly loaded Ethernet network, most of the GA benchmarks see 30% to 40% improvement over the best competitor for 2 to 16 processors, while two of the Probabilistic Inference benchmarks see more than 80% improvement for two processors. As the network load increases, the benefits of non-strict cache coherence increase significantly.
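The blocking Global Read primitive can be caricatured as a cache read that refetches only when the local copy exceeds a caller-specified staleness bound; the cache layout and the fetch callback below are hypothetical.

```python
import time

_cache = {}  # addr -> (value, fetch_timestamp)

def global_read(addr, max_staleness, fetch):
    """Non-strict coherence: return a cached value if it is no older
    than max_staleness seconds; otherwise block and refetch."""
    now = time.time()
    if addr in _cache and now - _cache[addr][1] <= max_staleness:
        return _cache[addr][0]  # possibly outdated, but fresh enough
    value = fetch(addr)         # blocking remote read
    _cache[addr] = (value, now)
    return value
```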
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal scheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed after the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed in small, medium and large sizes with different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
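The makespan objective that drives the re-scheduling step is compact in code; the schedule encoding below is an assumption made for illustration.

```python
def makespan(schedule):
    """schedule: {machine: [(start_time, processing_time), ...]}.
    The makespan is the latest completion time over all machines."""
    return max((s + p for ops in schedule.values() for s, p in ops),
               default=0.0)

# Re-scheduling trigger: the exact times deviated from the estimates.
estimated = {"M1": [(0, 3), (3, 2)], "M2": [(0, 4)]}
actual    = {"M1": [(0, 4), (4, 2)], "M2": [(0, 4)]}
if makespan(actual) > makespan(estimated):
    pass  # the ABC-inspired heuristic would re-schedule the remaining jobs
```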
Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms to solve them. Cooperative coevolution has been effective for evolving neural networks. Different problem decomposition methods in cooperative coevolution determine how a neural network is decomposed and encoded, which affects its performance. A good problem decomposition method should provide enough diversity and also group interacting variables, which are the synapses in the neural network. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show improvement in performance in terms of accuracy when compared to some of the methods from the literature.
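The network being coevolved, an Elman recurrent cell whose hidden state feeds back as context, is a few lines of numpy; the decomposition into coevolved subcomponents is not shown.

```python
import numpy as np

def elman_step(x, h, Wx, Wh, Wy, bh, by):
    """One Elman RNN step: tanh hidden update with recurrent feedback."""
    h_new = np.tanh(Wx @ x + Wh @ h + bh)
    return Wy @ h_new + by, h_new

# One-step-ahead prediction over a scalar series (toy shapes).
rng = np.random.default_rng(0)
Wx, Wh, Wy = rng.normal(size=(4, 1)), rng.normal(size=(4, 4)), rng.normal(size=(1, 4))
bh, by, h = np.zeros(4), np.zeros(1), np.zeros(4)
for x_t in [0.1, 0.5, 0.9]:
    y, h = elman_step(np.array([x_t]), h, Wx, Wh, Wy, bh, by)
```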
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.00261
0.001604
0.001068
0.000822
0.000683
0.000502
0.000385
0.000227
0.000114
0.000083
0.00007
0.000062
0.00006
A new GA-Local Search Hybrid for Continuous Optimization Based on Multi-Level Single Linkage Clustering Hybrid algorithms formed by combining Genetic Algorithms with Local Search methods provide increased performance compared to a real-coded GA or Local Search alone. However, the cost of Local Search can be rather high. In this paper we present a new hybrid algorithm that reduces the total cost of Local Search by avoiding starting the method in basins of attraction where a local optimum has already been discovered. Additionally, the clustering information can be used to help maintain diversity within the population.
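The cost-saving rule, never launching a local search from a point inside the basin of an already-discovered optimum, can be sketched with a crude radius test standing in for the multi-level single linkage clustering; names and the radius are assumptions.

```python
import numpy as np

def maybe_local_search(x0, known_optima, local_search, radius=0.1):
    """Skip the expensive local search if x0 lies within `radius` of a
    previously discovered optimum; otherwise run it and record the result."""
    for opt in known_optima:
        if np.linalg.norm(x0 - opt) < radius:
            return opt              # reuse the known optimum
    opt = local_search(x0)          # expensive refinement
    known_optima.append(opt)
    return opt
```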
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT places greater emphasis on internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatched domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions and better reflect the system dynamics of the process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework for evaluating a hybrid virtual MEC environment that combines the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution that minimizes the driving cost of each DT. Simulation results show that the proposed scheme converges to a Nash-stable collaborative and distributed structure and minimizes the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal scheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed after the exact times are determined, taking machine set-up into account. To evaluate the performance of the proposed model, numerical experiments are designed in small, medium and large sizes with different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
DMM: fast map matching for cellular data Map matching for cellular data is to transform a sequence of cell tower locations to a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which have heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it processes cell tower sequences as RNN inference, resulting in fast map matching. To transform DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder map matching framework with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002429
0.001493
0.000994
0.000765
0.000635
0.000467
0.000359
0.000211
0.000106
0.000077
0.000065
0.000057
0.000056
Evolutionary Modeling of Systems of Ordinary Differential Equations with Genetic Programming This paper describes an approach to the evolutionary modeling problem of ordinary differential equations including systems of ordinary differential equations and higher-order differential equations. Hybrid evolutionary modeling algorithms are presented to implement the automatic modeling of one- and multi-dimensional dynamic systems respectively. The main idea of the method is to embed a genetic algorithm in genetic programming where the latter is employed to discover and optimize the structure of a model, while the former is employed to optimize its parameters. A number of practical examples are used to demonstrate the effectiveness of the approach. Experimental results show that the algorithm has some advantages over most available modeling methods.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT places greater emphasis on internal personality and capability relative to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
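The abstract does not specify how SCD-DR recombines style and content, so the sketch below shows one common stand-in for such recombination: an AdaIN-style swap of per-channel feature statistics, re-normalizing content features from one domain to the style (mean/std) of the other to form a hybrid-domain sample. The function and shapes are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: AdaIN-style statistics swap as a generic style-content
# recombination; this is NOT the paper's SCD-DR module, only an analogue.
def adain_mix(content_feat, style_feat, eps=1e-5):
    """content_feat, style_feat: feature maps of shape (C, H, W)."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, (3, 8, 8))   # stand-in synthetic features
real = rng.normal(0.5, 2.0, (3, 8, 8))        # stand-in real-domain features
hybrid = adain_mix(synthetic, real)           # synthetic content, real style
```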
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
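The paper's key structural observation, that triggers are conjunctive while observations are disjunctive, reduces to a small predicate. The sketch below encodes it directly; the field names and labels are illustrative, not RemembERR's actual schema.

```python
# Hedged sketch: a bug is detected iff ALL triggers are applied (conjunctive)
# and AT LEAST ONE observation deviates (disjunctive). Labels are illustrative.
def bug_detected(erratum, applied_triggers, seen_observations):
    """erratum: dict with 'triggers' and 'observations' as sets of labels."""
    all_triggers_applied = erratum["triggers"] <= set(applied_triggers)
    any_observation_seen = bool(erratum["observations"] & set(seen_observations))
    return all_triggers_applied and any_observation_seen

erratum = {"triggers": {"power_transition", "custom_msr_config"},
           "observations": {"hang", "wrong_result"}}
print(bug_detected(erratum,
                   ["power_transition", "custom_msr_config"],
                   ["hang"]))   # True: both triggers applied, one deviation seen
```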
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
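For orientation, the sketch below shows the core membership update of plain fuzzy C-means, the basis that WKFCM extends with kernels and sample weights; those extensions are omitted here, so this is only the unweighted, linear-kernel special case used to assign data points to operating-condition clusters.

```python
import numpy as np

# Hedged sketch: standard FCM membership update (WKFCM's weighting and
# kernelization are omitted). Fuzzifier m and the toy data are illustrative.
def fcm_memberships(X, centers, m=2.0, eps=1e-9):
    """X: (n, d) data; centers: (c, d). Returns (n, c) membership matrix."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
U = fcm_memberships(X, centers)
# Each row of U sums to 1; point [5, 5] belongs almost fully to cluster 2.
```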
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
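A minimal sketch of the SVM admission step: train a classifier on historical tasks labeled by whether they met their deadline, then admit a new task only if on-time completion is predicted. The feature vector below ([task size, deadline, queue length, channel quality]) and the toy labeling rule are assumptions, not the paper's feature design.

```python
import numpy as np
from sklearn.svm import SVC

# Hedged sketch of SVM-based task admission on synthetic history.
rng = np.random.default_rng(0)
X_hist = rng.uniform(0, 1, (200, 4))   # [size, deadline, queue, channel]
# Toy label: tasks finish on time when size + queue are small vs. deadline.
y_hist = (X_hist[:, 0] + X_hist[:, 2] < X_hist[:, 1] + 0.5).astype(int)

clf = SVC(kernel="rbf").fit(X_hist, y_hist)

def admit(task_features):
    """Add to the user's queue only if on-time completion is predicted."""
    return bool(clf.predict(np.asarray(task_features).reshape(1, -1))[0])

print(admit([0.2, 0.9, 0.1, 0.5]))  # small task, generous deadline -> admit
```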
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
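The two ingredients the highlights name, a speed-deviation reward and a 1 s reaction delay, are small enough to sketch directly. The delay is modeled here by acting on observations that are delay_steps old; the step size and reward scale are assumptions.

```python
from collections import deque

# Hedged sketch of the reward and delay mechanics only (no DDPG training).
dt = 0.1                      # assumed simulation step (s)
delay_steps = int(1.0 / dt)   # 1 s reaction delay

obs_buffer = deque(maxlen=delay_steps)

def delayed_observation(current_obs):
    """Return the observation from ~1 s ago (or the oldest available)."""
    obs_buffer.append(current_obs)
    return obs_buffer[0]

def reward(model_speed, observed_human_speed):
    """Negative absolute deviation from the empirically observed speed."""
    return -abs(model_speed - observed_human_speed)

print(reward(10.0, 12.0))     # -2.0: penalty for a 2 m/s speed deviation
```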
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
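The study's central behavioral measure, monitoring frequency, can be computed from per-sample gaze labels as the rate of glances from the NDRT back to driving-related areas of interest. The AOI labels and sampling rate below are illustrative assumptions, not the study's exact coding scheme.

```python
# Hedged sketch: monitoring frequency from a per-sample AOI label sequence.
def monitoring_frequency(aoi_sequence, hz=60.0):
    """aoi_sequence: per-sample AOI labels, e.g. 'road', 'mirror', 'ndrt'."""
    driving_aois = {"road", "mirror", "cluster"}
    glances = sum(1 for prev, cur in zip(aoi_sequence, aoi_sequence[1:])
                  if prev == "ndrt" and cur in driving_aois)
    duration_s = len(aoi_sequence) / hz
    return glances / duration_s            # monitoring glances per second

sample = ["ndrt"] * 120 + ["road"] * 60 + ["ndrt"] * 120 + ["road"] * 60
print(monitoring_frequency(sample))        # 2 glances over 6 s -> ~0.33 Hz
```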
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for a map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001893
0.001163
0.000775
0.000596
0.000495
0.000364
0.000279
0.000164
0.000083
0.00006
0.00005
0.000045
0.000044
Privacy-Preserving Traffic Flow Prediction: A Federated Learning Approach Existing deep learning approaches to traffic flow forecasting achieve excellent results based on large volumes of data gathered by governments and organizations. However, these data sets may contain large amounts of users' private data, which challenges current prediction approaches as user privacy has drawn growing public concern in recent years. Therefore, how to develop accurate traffic prediction while preserving privacy is a significant problem to be solved, and there is a tradeoff between these two objectives. To address this challenge, we introduce a privacy-preserving machine learning technique named federated learning (FL) and propose an FL-based gated recurrent unit neural network algorithm (FedGRU) for traffic flow prediction (TFP). FedGRU differs from current centralized learning methods in that it updates universal learning models through a secure parameter aggregation mechanism rather than directly sharing raw data among organizations. In the secure parameter aggregation mechanism, we adopt a federated averaging algorithm to reduce the communication overhead during the model parameter transmission process. Furthermore, we design a joint announcement protocol to improve the scalability of FedGRU. We also propose an ensemble clustering-based scheme for TFP that groups the organizations into clusters before applying the FedGRU algorithm. Extensive case studies on a real-world data set demonstrate that FedGRU can produce predictions that are merely 0.76 km/h worse than the state of the art in terms of mean average error under the privacy preservation constraint, confirming that the proposed model delivers accurate traffic predictions without compromising data privacy.
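The federated averaging step at the heart of FedGRU is standard FedAvg: each organization trains locally and only model parameters, weighted by local sample counts, are aggregated, so raw traffic data never leaves its owner. A minimal sketch, with illustrative parameter names:

```python
import numpy as np

# Minimal FedAvg sketch; parameter names and counts are illustrative.
def fed_avg(local_params, sample_counts):
    """local_params: list of dicts name -> ndarray, one dict per client."""
    total = float(sum(sample_counts))
    return {name: sum(p[name] * (n / total)
                      for p, n in zip(local_params, sample_counts))
            for name in local_params[0]}

clients = [{"W": np.ones((2, 2)) * 1.0}, {"W": np.ones((2, 2)) * 3.0}]
global_model = fed_avg(clients, sample_counts=[100, 300])
print(global_model["W"])   # 0.25*1 + 0.75*3 = 2.5 everywhere
```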
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviation from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for the dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from the estimated times. The objective is near-optimal rescheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed after the exact times are determined, taking machine set-up into account. In order to evaluate the performance of the proposed model, numerical experiments are designed in small, medium and large sizes at different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
Predicting Node failure in cloud service systems. In recent years, many traditional software systems have migrated to cloud computing platforms and are provided as online services. The service quality matters because system failures could seriously affect business and user experience. A cloud service system typically contains a large number of computing nodes. In reality, nodes may fail and affect service availability. In this paper, we propose a failure prediction technique, which can predict the failure-proneness of a node in a cloud service system based on historical data, before node failure actually happens. The ability to predict faulty nodes enables the allocation and migration of virtual machines to the healthy nodes, therefore improving service availability. Predicting node failure in cloud service systems is challenging, because a node failure could be caused by a variety of reasons and reflected by many temporal and spatial signals. Furthermore, the failure data is highly imbalanced. To tackle these challenges, we propose MING, a novel technique that combines: 1) a LSTM model to incorporate the temporal data, 2) a Random Forest model to incorporate spatial data; 3) a ranking model that embeds the intermediate results of the two models as feature inputs and ranks the nodes by their failure-proneness, 4) a cost-sensitive function to identify the optimal threshold for selecting the faulty nodes. We evaluate our approach using real-world data collected from a cloud service system. The results confirm the effectiveness of the proposed approach. We have also successfully applied the proposed approach in real industrial practice.
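Of MING's four components, the cost-sensitive threshold step is the most self-contained: given failure-proneness scores from the ranking model, pick the cutoff that minimizes expected cost, with a missed failure assumed far costlier than a false alarm. The cost ratio and toy data below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of cost-sensitive threshold selection over ranking scores.
C_FN, C_FP = 50.0, 1.0          # assumed relative costs: missed failure vs alarm

def best_threshold(scores, labels, grid=np.linspace(0, 1, 101)):
    scores, labels = np.asarray(scores), np.asarray(labels)
    costs = [C_FN * np.sum((scores < t) & (labels == 1)) +
             C_FP * np.sum((scores >= t) & (labels == 0)) for t in grid]
    return grid[int(np.argmin(costs))]

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)                       # 1 = node failed
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.2, 500), 0, 1)
print(best_threshold(scores, labels))   # low cutoff: missing failures is costly
```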
Application Task Allocation in Cognitive IoT: A Reward-Driven Game Theoretical Approach In this study we consider a scenario in which sensors belonging to different platforms and owned by different owners opportunistically join efforts to improve the overall sensing capabilities in a given geographical area by forming clusters of nodes. The considered nodes have cognitive radio and exploit device-to-device communications. A solution is proposed which relies on a Cluster Head (CH) that guides the whole task allocation strategy. The addressed challenges are the following: i) collaborative spectrum sensing for effective communications within the cluster; ii) assignment of each sensing task request to a single node in the cluster. The first challenge is addressed by proposing a collaborative sensing procedure in which each node communicates to the CH the received signal energy of licensed users, so that the latter decides on the availability of the band by fusing the received information to minimise the uncertainty in detecting free spectrum. The second challenge is addressed by proposing a non-cooperative game-theoretic approach in which cluster nodes strive to selfishly increase their utility by winning the task. Each node takes part in the competition by considering two elements: the gain won for its contribution to sensing and for the execution of the task (in case it wins the competition), and the cost in terms of energy consumed if the task is executed. A Nash Equilibrium Point (NEP) is found for the aforementioned game, at which no node has an incentive to deviate unilaterally. Extensive simulations are performed to evaluate the impact of the probability of false alarm, utility function weighting factors, and the presence of licensed users on the cumulative system utility.
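The per-node utility the game is built on, gain for sensing and execution minus energy cost, can be sketched directly; a selfish node competes for a task only when its utility is positive. The linear cost model and all numbers below are illustrative assumptions, not the paper's utility function.

```python
# Hedged sketch of the reward-minus-energy utility driving the game.
def utility(gain_sensing, gain_execution, energy_needed, energy_price=1.0):
    return gain_sensing + gain_execution - energy_price * energy_needed

def wants_task(node):
    """A selfish node competes for the task only if its utility is positive."""
    return utility(node["g_sense"], node["g_exec"], node["energy"]) > 0

nodes = [{"g_sense": 1.0, "g_exec": 4.0, "energy": 3.0},
         {"g_sense": 0.5, "g_exec": 2.0, "energy": 6.0}]
print([wants_task(n) for n in nodes])   # [True, False]
```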
1
0.002144
0.001318
0.000878
0.000675
0.000561
0.000412
0.000317
0.000186
0.000094
0.000068
0.000057
0.000051
0.00005
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
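The gradient-based learning this paper surveys reduces, in its simplest form, to a forward pass, back-propagated gradients, and an SGD update. A minimal sketch on a toy batch, with assumed sizes and learning rate:

```python
import numpy as np

# Minimal back-propagation sketch: one SGD step of a tiny two-layer network.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                  # toy batch: 8 samples, 4 features
y = rng.integers(0, 2, size=(8, 1)).astype(float)

W1, W2 = rng.normal(size=(4, 16)) * 0.1, rng.normal(size=(16, 1)) * 0.1
lr = 0.1

h = np.tanh(X @ W1)                          # forward pass
p = 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid output
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

d_out = (p - y) / len(X)                     # backward pass (cross-entropy grad)
grad_W2 = h.T @ d_out
d_h = (d_out @ W2.T) * (1.0 - h ** 2)        # through tanh
grad_W1 = X.T @ d_h

W1 -= lr * grad_W1                           # SGD update
W2 -= lr * grad_W2
print(f"loss before update: {loss:.3f}")
```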
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the mismatched domain distributions from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: significant increases in the number of completed tasks, energy savings, and cost reductions compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
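Why the weakest channel dominates over-the-air aggregation can be shown in a few lines: devices pre-equalize their updates by 1/h_k so the signals superpose to the sum, but the transmit-power cap ties the common scale to the smallest channel gain, amplifying receiver noise. The sketch below covers only this baseline effect; the paper's relay step and optimization are omitted, and all values are illustrative.

```python
import numpy as np

# Hedged sketch: over-the-air aggregation with one straggler channel.
rng = np.random.default_rng(0)
K, d = 5, 10
updates = rng.normal(size=(K, d))          # local model updates g_k
h = np.array([1.0, 0.9, 0.8, 0.7, 0.05])   # channel gains; one straggler
P_max = 1.0

eta = P_max * h.min() ** 2                 # common scale limited by min h_k
x = (np.sqrt(eta) / h[:, None]) * updates  # pre-equalized transmit signals
y = (h[:, None] * x).sum(axis=0) + rng.normal(scale=0.1, size=d)
estimate = y / (K * np.sqrt(eta))          # noisy estimate of the mean update
mse = np.mean((estimate - updates.mean(axis=0)) ** 2)
print(f"aggregation MSE: {mse:.4f}")       # grows as h.min() shrinks
```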
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for a map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001869
0.001149
0.000765
0.000588
0.000489
0.00036
0.000276
0.000162
0.000082
0.000059
0.00005
0.000044
0.000043
Continual Lifelong Learning with Neural Networks: A Review. Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational learning systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
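The paper's key observation, that triggers are conjunctive while observations are disjunctive, can be captured in a one-line predicate. The sketch below is an illustrative rendering with hypothetical trigger and observation names, not tooling from RemembERR:

def bug_detected(applied_triggers, required_triggers, deviations):
    """Triggers are conjunctive (all must be applied); observations are
    disjunctive (one deviating observation suffices to detect the bug)."""
    return required_triggers <= applied_triggers and len(deviations) > 0

# Hypothetical erratum: needs a power-level transition AND a custom MSR config.
required = {"power_level_transition", "msr_custom_config"}
print(bug_detected({"power_level_transition", "msr_custom_config"},
                   required, ["wrong_flag_value"]))                  # True
print(bug_detected({"power_level_transition"}, required, ["hang"]))  # False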
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
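For intuition about the clustering stage, the sketch below runs plain (unweighted, non-kernel) fuzzy C-means as a simplified stand-in for WKFCM to separate two hypothetical operating conditions; the data, cluster count, and fuzzifier are assumptions:

import numpy as np

rng = np.random.default_rng(3)
# Two hypothetical operating conditions in a 2-D feature space.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
c, m = 2, 2.0                                  # clusters and fuzzifier
U = rng.dirichlet(np.ones(c), size=len(X))     # initial fuzzy memberships

for _ in range(30):
    um = U ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]           # weighted means
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)                 # membership update

print(np.round(centers, 2))   # recovered condition centers near (0,0) and (2,2)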
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
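A minimal sketch of the SVM-based admission idea, assuming hypothetical task features (execution time, deadline, queueing delay) and synthetic labels; the paper's actual feature set and training pipeline are not reproduced here:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical training set: (local execution time, deadline, queueing delay), in s.
X = rng.uniform([0.1, 0.1, 0.0], [2.0, 2.0, 1.0], size=(500, 3))
# Label 1 ("admit") if the task would finish before its deadline.
y = ((X[:, 0] + X[:, 2]) <= X[:, 1]).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

def admit(exec_time, deadline, queue_delay):
    """Admit a task only if the SVM predicts on-time completion."""
    return bool(clf.predict([[exec_time, deadline, queue_delay]])[0])

print(admit(0.3, 1.5, 0.2))  # likely admitted
print(admit(1.8, 0.4, 0.8))  # likely dropped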
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash-stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
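As a sketch of the stated reward design, the following toy class penalizes the deviation between the follower's speed and the leader's speed observed roughly one reaction delay (1 s) earlier; the 10 Hz step, the names, and the exact penalty form are assumptions, not the paper's code:

from collections import deque

STEPS_PER_SECOND = 10          # assumed simulation rate, so 1 s = 10 steps

class DelayedSpeedReward:
    """Reward = negative absolute deviation between the follower's speed and
    the leader's speed observed roughly one reaction delay earlier."""
    def __init__(self):
        self.leader_history = deque(maxlen=STEPS_PER_SECOND)

    def __call__(self, follower_speed, leader_speed):
        self.leader_history.append(leader_speed)
        delayed = self.leader_history[0]   # oldest sample (~1 s ago once full)
        return -abs(follower_speed - delayed)

reward = DelayedSpeedReward()
for t in range(12):                        # leader slowly accelerates
    print(round(reward(follower_speed=10.0, leader_speed=10.0 + 0.1 * t), 2))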
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviations from the primary schedule. In this article, a heuristic model, inspired by the Artificial Bee Colony algorithm, is proposed for the dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of jobs deviate from their estimates. The objective is near-optimal scheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and re-scheduling is then performed once the exact times are determined, taking machine set-up into account. In order to evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times, and the statistical results illustrate the efficiency of the proposed algorithm.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Given that zero bits account for as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
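To illustrate why zero bits in non-zero weights matter, this sketch measures the zero-bit fraction of hypothetical 8-bit quantized weights, each zero bit being a partial product a conventional MAC computes for nothing; the weight distribution and quantization scheme are assumptions:

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical trained weights, quantized to unsigned 8-bit magnitudes.
weights = rng.normal(0.0, 0.05, size=100_000)
q = np.round(np.abs(weights) / np.abs(weights).max() * 255).astype(np.uint8)

# Every zero bit in a weight yields a zero partial product that a plain
# MAC array still spends effort on; SAC-style designs skip these entirely.
total_bits = q.size * 8
one_bits = sum(int(np.count_nonzero(q & (1 << b))) for b in range(8))
print(f"zero-bit fraction: {(total_bits - one_bits) / total_bits:.1%}")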
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002418
0.001486
0.00099
0.000761
0.000633
0.000465
0.000357
0.00021
0.000106
0.000077
0.000064
0.000057
0.000056
Learning without Forgetting. When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilit...
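A minimal sketch of the Learning-without-Forgetting loss structure: new-task cross-entropy plus a distillation term that anchors the old-task head to responses recorded before training on the new task. All array shapes and the toy data are assumptions, not the paper's setup:

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, labels, old_logits, old_targets, lam=1.0, T=2.0):
    """Cross-entropy on the new task, plus a distillation term that keeps the
    old-task head close to responses recorded *before* training started."""
    ce = -np.log(softmax(new_logits)[np.arange(len(labels)), labels]).mean()
    distill = -(softmax(old_targets, T) * np.log(softmax(old_logits, T))).sum(-1).mean()
    return ce + lam * distill

rng = np.random.default_rng(0)
new_logits = rng.normal(size=(4, 5))      # current outputs, new-task head
old_logits = rng.normal(size=(4, 10))     # current outputs, old-task head
old_targets = old_logits + rng.normal(scale=0.1, size=(4, 10))  # recorded responses
print(lwf_loss(new_logits, np.array([0, 1, 2, 3]), old_logits, old_targets))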
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash-stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO Machine learning (ML) has attracted great research interest for physical layer design problems, such as channel estimation, thanks to its low complexity and robustness. Channel estimation via ML requires model training on a dataset, which usually includes the received pilot signals as input and channel data as output. In previous works, model training is mostly done via centralized learning (CL), where the whole training dataset is collected from the users at the base station (BS). This approach introduces huge communication overhead for data collection. In this paper, to address this challenge, we propose a federated learning (FL) framework for channel estimation. We design a convolutional neural network (CNN) trained on the local datasets of the users without sending them to the BS. We develop FL-based channel estimation schemes for both conventional and RIS (reconfigurable intelligent surface)-assisted massive MIMO (multiple-input multiple-output) systems, where a single CNN is trained on two different datasets covering both scenarios. We evaluate the performance for noisy and quantized model transmission and show that the proposed approach provides approximately 16 times lower overhead than CL, while maintaining satisfactory performance close to CL. Furthermore, the proposed architecture exhibits lower estimation error than state-of-the-art ML-based schemes.
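The federated alternative to CL reduces, at its core, to aggregating locally trained parameters instead of raw pilot/channel data. A minimal FedAvg-style sketch, with hypothetical parameter vectors standing in for the CNN weights:

import numpy as np

def fedavg(local_weights, n_samples):
    """Sample-weighted average of per-user parameter vectors; the BS sees
    only the uploaded models, never the raw pilot/channel training data."""
    n = np.asarray(n_samples, dtype=float)
    w = np.stack(local_weights)            # shape: (num_users, num_params)
    return (w * (n / n.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(2)
users = [rng.normal(size=8) for _ in range(4)]   # hypothetical local CNN weights
print(fedavg(users, n_samples=[120, 80, 200, 100]))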
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
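To see why the weakest channel dominates over-the-air aggregation, the sketch below simulates channel-inversion transmission under a common power budget; the gains, power cap, and noise level are assumptions. Relays would help precisely by raising the effective minimum gain:

import numpy as np

rng = np.random.default_rng(4)
n_dev, dim, P = 5, 8, 1.0               # devices, model size, per-device power cap
h = rng.rayleigh(0.5, size=n_dev)       # uplink channel gains
x = rng.normal(size=(n_dev, dim))       # unit-power local model updates

# Channel inversion: device i sends (sqrt(eta)/h_i)*x_i, so the signals
# superpose to sqrt(eta)*sum(x_i). The power cap |sqrt(eta)/h_i|^2 <= P must
# hold for every device, so eta is limited by the *weakest* channel.
eta = P * h.min() ** 2
noise = rng.normal(scale=0.05, size=dim)
received = np.sqrt(eta) * x.sum(axis=0) + noise
estimate = received / (n_dev * np.sqrt(eta))
print(np.abs(estimate - x.mean(axis=0)).max())  # error blows up as h.min() -> 0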
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning-based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001957
0.001203
0.000801
0.000616
0.000512
0.000376
0.000289
0.00017
0.000085
0.000062
0.000052
0.000046
0.000045
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
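The core reformulation is that a block outputs F(x) + x rather than an unreferenced mapping. A minimal fully connected sketch of a residual block (the paper uses convolutional blocks with batch normalization; the dimensions and initialization here are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)

class ResidualBlock:
    """y = relu(F(x) + x): the block learns the residual F(x) = W2·relu(W1·x)
    relative to an identity shortcut, instead of an unreferenced mapping."""
    def __init__(self, dim):
        self.w1 = rng.normal(0.0, 0.1, (dim, dim))
        self.w2 = rng.normal(0.0, 0.1, (dim, dim))

    def __call__(self, x):
        return relu(self.w2 @ relu(self.w1 @ x) + x)   # identity shortcut

x = rng.normal(size=16)
for block in [ResidualBlock(16) for _ in range(4)]:    # stack blocks for depth
    x = block(x)
print(x[:4])

Because each shortcut passes the input through unchanged, gradients have a direct path to earlier layers, which is why such stacks remain trainable at depths where plain networks degrade.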
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required whenever the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes owing to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computing data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVMs) to assess the potential of a task to be completed within its deadline and, accordingly, to decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloaded from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume considerable time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across collaborative groups directly affect the performance of cooperative driving. Therefore, developing an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design a DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash-stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
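A sketch of how monitoring frequency might be computed from area-of-interest gaze labels, counting glance transitions toward the road scene per second; the sampling rate and the labels are hypothetical, not the study's processing pipeline:

# Hypothetical gaze stream: one area-of-interest label per 50 ms frame.
gaze = ["ndrt", "ndrt", "road", "ndrt", "road", "road", "ndrt", "ndrt",
        "road", "ndrt", "ndrt", "ndrt", "road", "ndrt", "ndrt", "ndrt"]

# A monitoring glance = a transition from the NDRT to the road scene.
glances = sum(1 for prev, cur in zip(gaze, gaze[1:])
              if prev != "road" and cur == "road")
duration_s = len(gaze) * 0.05
print(f"{glances / duration_s:.2f} glances/s")  # lower frequency ~ higher trust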
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the considerable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Given that zero bits account for as much as 68.9% of the bits in the weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system in real time. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates behaviors sensed from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.002103, 0.001292, 0.000861, 0.000662, 0.00055, 0.000404, 0.00031, 0.000183, 0.000092, 0.000067, 0.000056, 0.00005, 0.000049
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers An ad hoc network is the cooperative engagement of a collection of Mobile Hosts without the required intervention of any centralized Access Point. In this paper, we present an innovative design for the operation of such ad hoc networks. The basic idea of the design is to operate each Mobile Host as a specialized router, which periodically advertises its view of the interconnection topology with other Mobile Hosts within the network. This amounts to a new sort of routing protocol. We have investigated modifications to the basic Bellman-Ford routing mechanisms, as specified by RIP [5], to make them suitable for a dynamic and self-starting network mechanism, as is required by users wishing to utilize ad hoc networks. Our modifications address some of the previous objections to the use of Bellman-Ford, related to the poor looping properties of such algorithms in the face of broken links and the resulting time-dependent nature of the interconnection topology describing the links between the Mobile Hosts. Finally, we describe the ways in which the basic network-layer routing can be modified to provide MAC-layer support for ad hoc networks.
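The DSDV abstract above hinges on destination sequence numbers to avoid the looping behavior of plain Bellman-Ford. A minimal sketch of the table-update rule follows; the field names are assumptions for illustration, not the protocol's wire format.

```python
# Illustrative sketch of the DSDV table-update rule: an advertised route
# replaces the current entry if it carries a newer destination sequence
# number, or the same sequence number with a lower metric.

from dataclasses import dataclass

@dataclass
class Route:
    dest: str
    next_hop: str
    metric: int      # hop count
    seq_no: int      # destination-issued sequence number (freshness)

table: dict[str, Route] = {}

def on_advertisement(adv: Route) -> None:
    cur = table.get(adv.dest)
    if (cur is None
            or adv.seq_no > cur.seq_no                        # fresher route wins
            or (adv.seq_no == cur.seq_no and adv.metric < cur.metric)):
        table[adv.dest] = adv
```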
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch at two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
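RemembERR's key observation above — triggers are conjunctive, observations are disjunctive — can be stated compactly in code. A minimal sketch, with the entry fields assumed from the abstract rather than taken from the actual database schema:

```python
# Minimal encoding of the paper's key observation: a bug is detected iff all
# of an erratum's triggers have been applied and at least one of its
# observations has been seen.

def bug_detected(triggers: set[str], observations: set[str],
                 applied: set[str], observed: set[str]) -> bool:
    return triggers <= applied and bool(observations & observed)

# Hypothetical erratum entry, for illustration only.
erratum = {"triggers": {"power_level_transition", "custom_feature_enabled"},
           "observations": {"machine_check", "wrong_result"}}
assert bug_detected(erratum["triggers"], erratum["observations"],
                    applied={"power_level_transition", "custom_feature_enabled"},
                    observed={"wrong_result"})
```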
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
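The SVM-based admission idea above reduces to a binary classifier over task features that gates the queue. Below is a hedged sketch using scikit-learn; the features and training data are invented for illustration, since the paper's actual feature set and pipeline are not specified in the abstract.

```python
# Hedged sketch of SVM-based task admission: train a classifier to predict
# whether a task will finish before its deadline, and drop it otherwise.

import numpy as np
from sklearn.svm import SVC

# Hypothetical features: [task size, deadline, channel quality, queue length]
X = np.array([[5.0, 2.0, 0.9, 1], [9.0, 1.0, 0.3, 6],
              [2.0, 3.0, 0.8, 0], [7.0, 1.5, 0.4, 5]])
y = np.array([1, 0, 1, 0])          # 1 = completed within deadline

clf = SVC(kernel="rbf").fit(X, y)

def admit(task_features) -> bool:
    """Admit the task to the user's queue only if completion is predicted."""
    return bool(clf.predict([task_features])[0])
```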
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
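The highlights above name two concrete ingredients: a speed-deviation reward and a 1 s reaction delay. A minimal sketch of those two pieces follows; the exact reward shaping, time step, and dynamics of the cited model are assumptions here.

```python
# Sketch of the highlighted ingredients: a reward that penalizes the squared
# deviation between simulated and observed (human) speed, applied to an
# action delayed by the stated 1 s reaction time.

from collections import deque

REACTION_DELAY_STEPS = 10            # 1 s at an assumed 0.1 s simulation step
pending = deque([0.0] * REACTION_DELAY_STEPS)

def step_reward(policy_accel: float, sim_speed: float,
                human_speed: float, dt: float = 0.1):
    """Apply the delayed action, then score speed-tracking fidelity."""
    pending.append(policy_accel)
    applied = pending.popleft()      # action chosen REACTION_DELAY_STEPS ago
    new_speed = max(0.0, sim_speed + applied * dt)
    reward = -(new_speed - human_speed) ** 2
    return new_speed, reward
```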
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with substantial ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Given that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
GRASP: A Search Algorithm for Propositional Satisfiability This paper introduces GRASP (Generic seaRch Algorithm for the Satisfiability Problem), a new search algorithm for Propositional Satisfiability (SAT). GRASP incorporates several search-pruning techniques that proved to be quite powerful on a wide variety of SAT problems. Some of these techniques are specific to SAT, whereas others are similar in spirit to approaches in other fields of Artificial Intelligence. GRASP is premised on the inevitability of conflicts during the search and its most distinguishing feature is the augmentation of basic backtracking search with a powerful conflict analysis procedure. Analyzing conflicts to determine their causes enables GRASP to backtrack nonchronologically to earlier levels in the search tree, potentially pruning large portions of the search space. In addition, by "recording" the causes of conflicts, GRASP can recognize and preempt the occurrence of similar conflicts later on in the search. Finally, straightforward bookkeeping of the causality chains leading up to conflicts allows GRASP to identify assignments that are necessary for a solution to be found. Experimental results obtained from a large number of benchmarks indicate that application of the proposed conflict analysis techniques to SAT algorithms can be extremely effective for a large number of representative classes of SAT instances.
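GRASP augments basic backtracking search with conflict analysis. As a point of reference for what it augments, here is a minimal DPLL-style backtracking search with unit propagation; conflict analysis and non-chronological backtracking, GRASP's distinguishing features, are deliberately not implemented in this sketch.

```python
# Minimal DPLL-style backtracking SAT search. A formula is a list of clauses;
# a clause is a list of non-zero ints, negative for negated literals.

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly satisfy clauses with a single free literal.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None                      # conflict: clause falsified
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0   # forced (implied) assignment
                changed = True
    variables = {abs(l) for c in clauses for l in c}
    free = variables - assignment.keys()
    if not free:
        return assignment                        # all clauses satisfied
    var = min(free)
    for value in (True, False):                  # chronological backtracking
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 2], [-2, 3]]))          # {1: True, 2: True, 3: True}
```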
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch at two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Security and Privacy Issues in Autonomous Vehicles: A Layer-Based Survey Artificial Intelligence (AI) is changing every technology we are used to deal with. Autonomy has long been a sought-after goal in vehicles, and now more than ever we are very close to that goal. Big auto manufacturers as well are investing billions of dollars to produce Autonomous Vehicles (AVs). This new technology has the potential to provide more safety for passengers, less crowded roads, congestion alleviation, optimized traffic, fuel-saving, less pollution as well as enhanced travel experience among other benefits. But this new paradigm shift comes with newly introduced privacy issues and security concerns. Vehicles before were dumb mechanical devices, now they are becoming smart, computerized, and connected. They collect huge troves of information, which needs to be protected from breaches. In this work, we investigate security challenges and privacy concerns in AVs. We examine different attacks launched in a layer-based approach. We conceptualize the architecture of AVs in a four-layered model. Then, we survey security and privacy attacks and some of the most promising countermeasures to tackle them. Our goal is to shed light on the open research challenges in the area of AVs as well as offer directions for future research.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with substantial ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Given that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
Which model to use for cortical spiking neurons? We discuss the biological plausibility and computational efficiency of some of the most useful models of spiking and bursting neurons. We compare their applicability to large-scale simulations of cortical neural networks.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain-distribution mismatch at two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, comprising source knowledge learning, knowledge transferring, and knowledge refining, is designed; it not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop the task or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with substantial ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Given that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that simultaneously eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach to measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.00186
0.001143
0.000761
0.000585
0.000487
0.000358
0.000275
0.000161
0.000081
0.000059
0.00005
0.000044
0.000043
Simple model of spiking neurons A model is presented that reproduces the spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics with the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) on a desktop PC.
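The model's two published equations, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset (if v >= 30 mV: v <- c, u <- u + d), can be simulated in a few lines. The sketch below uses the regular-spiking parameter set and the two 0.5 ms half-steps for v used in the published code; the input current I is an arbitrary choice.

```python
# Euler simulation of the model: v' = 0.04*v^2 + 5*v + 140 - u + I,
# u' = a*(b*v - u), with reset v <- c, u <- u + d when v >= 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
v, u = -65.0, b * -65.0              # membrane potential, recovery
I = 10.0                             # injected DC current (arbitrary)

spikes = []
for t in range(1000):                # 1000 steps of 1 ms
    if v >= 30.0:                    # spike detected: apply the reset
        spikes.append(t)
        v, u = c, u + d
    # two 0.5 ms half-steps for v, as in the published MATLAB code
    v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)

print(len(spikes), "spikes in 1 s of simulated time")
```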
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumer in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions so as to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
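For intuition about the clustering step, the sketch below implements the standard (unweighted, non-kernel) fuzzy C-means membership update that WKFCM builds on; the paper's variant additionally weights features and uses a kernel-induced distance, both omitted here.

```python
# Standard fuzzy C-means membership update (fuzzifier m); WKFCM's
# feature weights and kernel distance are omitted for clarity.
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)   # rows sum to 1

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.5, 0.5]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])      # operating conditions
print(fcm_memberships(X, centers))
```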
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing-data offloading, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
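A hedged sketch of the admission step, assuming invented features and labels rather than the paper's measurement setup: train an SVM classifier to predict deadline feasibility and drop tasks predicted to miss.

```python
# Invented features/labels for illustration: [task size, deadline,
# channel quality] -> 1 if the task historically met its deadline.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 3))
y = (0.8 * X[:, 1] + 0.5 * X[:, 2] - 0.9 * X[:, 0] > 0).astype(int)

admit = SVC(kernel="rbf").fit(X, y)          # the admission classifier

task = np.array([[0.3, 0.7, 0.6]])           # a newly arriving task
print("admit" if admit.predict(task)[0] == 1 else "drop")
```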
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
A survey of dynamic spectrum allocation based on reinforcement learning algorithms in cognitive radio networks Cognitive radio is an emerging technology that is considered an evolution of software-defined radio, in which cognition and decision-making components are included. The main function of cognitive radio is to exploit "spectrum holes" or "white spaces" to address the challenge of the low utilization of radio resources. Dynamic spectrum allocation, whose significant functions are to ensure that cognitive users access the available frequency and bandwidth to communicate in an opportunistic manner and to minimize the interference between primary and secondary users, is a key mechanism in cognitive radio networks. Reinforcement learning, which rapidly analyzes large amounts of data in a model-free manner, dramatically facilitates the performance of dynamic spectrum allocation in real application scenarios. This paper presents a survey of state-of-the-art spectrum allocation algorithms based on reinforcement learning techniques in cognitive radio networks. The advantages and disadvantages of each algorithm are analyzed in their specific practical application scenarios. Finally, we discuss open issues in dynamic spectrum allocation that can be topics of future research.
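As a toy example of the surveyed approach, the sketch below uses a stateless Q-update (a bandit special case of Q-learning) to let a secondary user learn which channel tends to be idle; the fixed idle probabilities are an invented stand-in for primary-user activity.

```python
# Stateless Q-update over 4 channels; idle probabilities are an
# invented stand-in for primary-user activity, unknown to the agent.
import numpy as np

rng = np.random.default_rng(3)
idle_prob = np.array([0.2, 0.5, 0.7, 0.9])
Q = np.zeros(4)
alpha, eps = 0.1, 0.1                        # learning rate, exploration

for _ in range(5000):
    ch = rng.integers(4) if rng.random() < eps else int(Q.argmax())
    reward = 1.0 if rng.random() < idle_prob[ch] else -1.0  # collision
    Q[ch] += alpha * (reward - Q[ch])

print("learned channel values:", Q.round(2))  # highest for channel 3
```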
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
A tutorial on support vector regression In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
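A minimal usage example of epsilon-SVR as covered by the tutorial, here via scikit-learn's SVR: fit a noisy sine with an RBF kernel, where points inside the epsilon-tube incur no loss and only points outside it become support vectors.

```python
# Fit a noisy sine with epsilon-SVR (RBF kernel) via scikit-learn.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = np.sort(rng.uniform(0.0, 2.0 * np.pi, 80))[:, None]
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# Only points outside the epsilon-tube become support vectors.
print("support vectors:", len(model.support_), "of", len(X))
print("f(pi/2) ~", float(model.predict([[np.pi / 2]])[0]))
```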
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumer in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions so as to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing-data offloading, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences simply by performing RNN inference, resulting in fast map matching speed. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
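A shape-only skeleton of the RNN idea (untrained, with invented sizes): embed cell-tower IDs, encode the sequence with a GRU, and score candidate road segments per step. DMM's spatial-aware representation, encoder-decoder structure, and RL refinement are omitted.

```python
# Shape-only skeleton with invented sizes; untrained.
import torch
import torch.nn as nn

n_towers, n_roads, emb, hid = 1000, 500, 32, 64

class MapMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_towers, emb)   # cell-tower IDs
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.head = nn.Linear(hid, n_roads)        # road-segment scores

    def forward(self, towers):                     # (batch, seq_len)
        h, _ = self.rnn(self.embed(towers))
        return self.head(h)                        # (batch, seq, roads)

seq = torch.randint(0, n_towers, (1, 12))          # one tower sequence
print(MapMatcher()(seq).argmax(dim=-1))            # road ID per step
```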
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001914
0.001177
0.000784
0.000603
0.000501
0.000368
0.000283
0.000166
0.000084
0.000061
0.000051
0.000045
0.000044
Optimization Of Radio And Computational Resources For Energy Efficiency In Latency-Constrained Application Offloading Providing femto access points (FAPs) with computational capabilities will allow (either total or partial) offloading of highly demanding applications from smartphones to the so-called femto-cloud. Such offloading promises to be beneficial in terms of battery savings at the mobile terminal (MT) and/or latency reduction in the execution of applications. However, for this promise to become a reality, the energy and/or the time required for the communication process must be compensated by the energy and/or the time savings that result from the remote computation at the FAPs. For this problem, we provide in this paper a framework for the joint optimization of radio and computational resource usage that exploits the tradeoff between energy consumption and latency. Multiple antennas are assumed to be available at the MT and the serving FAP. As a result of the optimization, the optimal communication strategy (e.g., transmission power, rate, and precoder) is obtained, as well as the optimal distribution of the computational load between the handset and the serving FAP. This paper also establishes the conditions under which total offloading or no offloading is optimal, determines the minimum affordable latency in the execution of the application, and analyzes, as a particular case, the minimization of the total consumed energy without latency constraints.
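A back-of-the-envelope version of the underlying tradeoff, with all constants invented: offload fully when transmitting the input and executing remotely both fits the latency budget and costs the terminal less energy than local execution. This ignores the paper's multi-antenna precoding and partial offloading.

```python
# All constants are invented; energies in joules, times in seconds.
def should_offload(bits, cycles, f_local=1e9, e_per_cycle=1e-9,
                   rate=20e6, p_tx=0.5, f_fap=4e9, latency_max=0.3):
    e_local = cycles * e_per_cycle           # local execution energy
    t_off = bits / rate + cycles / f_fap     # uplink, then remote run
    e_off = p_tx * bits / rate               # MT pays only radio energy
    return t_off <= latency_max and e_off < e_local

# 2 Mbit of input data, 0.5 Gcycles of computation
print(should_offload(bits=2e6, cycles=5e8))  # True under these numbers
```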
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumer in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented to identify multiple operating conditions so as to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing-data offloading, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate the high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason the fundamentals of URLLC remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with a corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris speeds up inference by up to 1.50x and improves power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed driver behaviors through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Dynamic Computation Offloading for Mobile-Edge Computing with Energy Harvesting Devices. Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieve green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state, without requiring distribution information of the computation task requests, wireless channels, and EH processes. The implementation of the algorithm only requires solving a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Simulation results are presented to corroborate the theoretical analysis and validate the effectiveness of the proposed algorithm.
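The per-slot structure of Lyapunov drift-plus-penalty methods like the one above can be sketched in a few lines: each slot, choose the action minimizing a weighted sum of the penalty (execution cost) and the backlog-weighted resource usage, then update a virtual queue. The action set, cost numbers, and energy model below are illustrative assumptions, not the paper's formulation:

```python
# Illustrative drift-plus-penalty decision rule: per slot, greedily
# minimize V * execution_cost + Q * energy_drawn, where Q is a virtual
# energy-backlog queue fed by (assumed) harvested energy.

V = 10.0          # penalty weight trading off cost vs. queue stability
Q = 0.0           # virtual queue backlog

def cost(action):
    # Hypothetical per-slot (execution_cost, energy) for each choice.
    return {"local":   (2.0, 0.8),
            "offload": (1.2, 0.5),
            "drop":    (5.0, 0.0)}[action]

for slot in range(5):
    harvested = 0.4                       # hypothetical harvested energy
    best = min(("local", "offload", "drop"),
               key=lambda a: V * cost(a)[0] + Q * cost(a)[1])
    _, energy = cost(best)
    Q = max(Q + energy - harvested, 0.0)  # virtual queue update
    print(f"slot {slot}: {best}, Q={Q:.2f}")
```

The appeal of this structure, as the abstract notes, is that each slot's decision needs only the current state; no distributional knowledge of tasks, channels, or harvesting is required.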
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
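The trigger/context/observation formalization lends itself to a simple record structure. The sketch below illustrates the conjunctive-trigger, disjunctive-observation semantics the abstract describes; the field and label names are invented for illustration and are not RemembERR's actual schema:

```python
# Illustrative erratum record: triggers are conjunctive (all must be
# applied), contexts are required configuration, and observations are
# disjunctive (any single deviation confirms detection).
from dataclasses import dataclass

@dataclass
class Erratum:
    triggers: set[str]        # all required to expose the bug
    contexts: set[str]        # configuration in which the triggers apply
    observations: set[str]    # any one deviation suffices to detect it

    def detected_by(self, applied: set[str], ctx: set[str],
                    observed: set[str]) -> bool:
        return (self.triggers <= applied          # conjunctive triggers
                and self.contexts <= ctx          # required configuration
                and bool(self.observations & observed))  # any one deviation

e = Erratum({"power_level_transition", "custom_feature_on"},
            {"msr_config_X"}, {"machine_check", "wrong_result"})
print(e.detected_by({"power_level_transition", "custom_feature_on"},
                    {"msr_config_X"}, {"machine_check"}))   # True
```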
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
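As background for the clustering step described above, the following sketch shows a plain Gaussian-kernel fuzzy C-means membership update, using the standard identity that the kernel-space squared distance is 2(1 - K(x, v)) for a Gaussian kernel. It omits the paper's weighting scheme and is an illustration, not the WKFCM algorithm itself:

```python
# Gaussian-kernel fuzzy C-means membership update (unweighted). Data here
# stand in for sintering-parameter vectors; all sizes are assumed.
import numpy as np

def kernel(x, v, sigma=1.0):
    return np.exp(-np.sum((x - v) ** 2, axis=-1) / (2 * sigma ** 2))

def update_memberships(X, V, m=2.0):
    # ||phi(x) - phi(v)||^2 = 2 * (1 - K(x, v)) for a Gaussian kernel
    d2 = 2.0 * (1.0 - kernel(X[:, None, :], V[None, :, :]))  # (n, c)
    d2 = np.maximum(d2, 1e-12)
    inv = d2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.random.default_rng(1).normal(size=(6, 2))   # synthetic parameter vectors
V = X[:2].copy()                                   # two initial cluster centers
U = update_memberships(X, V)
print(U.round(3))                                  # each row sums to 1
```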
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
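The admission-control idea above reduces, in its simplest form, to a binary classifier over task features. A minimal scikit-learn sketch with synthetic features and labels (the real feature set, labeling rule, and kernel choice are the paper's, not shown here):

```python
# SVM-based task admission sketch: predict whether a task can finish
# within its deadline and admit or drop it accordingly. Features and the
# labeling rule below are synthetic assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Features: [task size, deadline, current queue length]
X = rng.uniform(0, 1, size=(200, 3))
# Synthetic label: completable if size plus queue load is small vs. deadline
y = (X[:, 0] + 0.5 * X[:, 2] < X[:, 1] + 0.4).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

task = np.array([[0.3, 0.6, 0.2]])        # hypothetical incoming task
print("admit" if clf.predict(task)[0] == 1 else "drop")
```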
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
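Two of the bullets above, the speed-deviation reward and the 1 s reaction delay, can be captured in a few lines of environment logic. The sketch below is an illustrative stand-in: it buffers actions for one reaction delay and rewards small speed deviations, but it is not the paper's simulation or its DDPG training loop:

```python
# Speed-deviation reward with a 1 s reaction delay, modeled by applying
# the acceleration command issued one delay ago. Step size and values
# are assumptions.
from collections import deque

REACTION_DELAY_STEPS = 10          # 1 s at an assumed 0.1 s simulation step
action_buffer = deque([0.0] * REACTION_DELAY_STEPS, maxlen=REACTION_DELAY_STEPS)

def step(v_ego, v_target, accel_cmd, dt=0.1):
    delayed_accel = action_buffer[0]         # command from 1 s ago
    action_buffer.append(accel_cmd)          # newest command enters the buffer
    v_ego = max(v_ego + delayed_accel * dt, 0.0)
    reward = -abs(v_ego - v_target)          # penalize speed deviation
    return v_ego, reward

v = 10.0
for t in range(5):
    v, r = step(v, v_target=12.0, accel_cmd=0.5)
    print(f"t={t}: v={v:.2f}, reward={r:.2f}")
```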
A Heuristic Model For Dynamic Flexible Job Shop Scheduling Problem Considering Variable Processing Times In real scheduling problems, unexpected changes such as changes in task features may occur frequently. These changes cause deviations from the primary schedule. In this article, a heuristic model inspired by the Artificial Bee Colony algorithm is proposed for a dynamic flexible job-shop scheduling (DFJSP) problem. This problem consists of n jobs that should be processed by m machines, where the processing times of the jobs deviate from the estimated times. The objective is near-optimal rescheduling after any change in tasks in order to minimise the maximal completion time (makespan). In the proposed model, scheduling is first done according to the estimated processing times, and rescheduling is then performed once the exact times are determined, taking machine set-up into account. In order to evaluate the performance of the proposed model, numerical experiments are designed at small, medium, and large sizes with different levels of change in processing times, and statistical results illustrate the efficiency of the proposed algorithm.
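The two-stage schedule-then-reschedule idea above can be illustrated with a toy scheduler. The greedy earliest-machine rule below is a deliberately simple stand-in for the bee-colony-inspired heuristic, and the job data are invented:

```python
# Schedule with estimated processing times, then reschedule once the
# exact times are known. Greedy list scheduling is an illustrative
# stand-in for the paper's heuristic.
def schedule(jobs, n_machines):
    """Assign each job to the earliest-available machine; return makespan."""
    loads = [0.0] * n_machines
    assign = []
    for job, p in jobs:
        m = loads.index(min(loads))          # earliest-available machine
        loads[m] += p
        assign.append((job, m))
    return max(loads), assign

estimated = [("j1", 4.0), ("j2", 2.0), ("j3", 3.0), ("j4", 5.0)]
mk_est, plan = schedule(estimated, n_machines=2)

# Exact times deviate from the estimates -> reschedule
exact = [("j1", 4.5), ("j2", 1.5), ("j3", 3.5), ("j4", 6.0)]
mk_new, replan = schedule(exact, n_machines=2)
print(f"estimated makespan {mk_est}, rescheduled makespan {mk_new}")
```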
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002153
0.001323
0.000881
0.000678
0.000563
0.000414
0.000318
0.000187
0.000094
0.000068
0.000057
0.000051
0.00005
On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on ...
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most-likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To transform DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
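The core modeling step described above, an RNN that maps a cell tower sequence to per-step road-segment predictions, can be sketched compactly in PyTorch. Sizes and vocabularies are invented, and the plain per-step classifier below stands in for DMM's actual encoder-decoder with reinforcement-learning refinement:

```python
# Illustrative RNN map matcher: embed tower ids, run a GRU, and emit a
# road-segment distribution at each step. All dimensions are assumptions.
import torch
import torch.nn as nn

N_TOWERS, N_ROADS, EMB, HID = 1000, 500, 32, 64

class MapMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(N_TOWERS, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, N_ROADS)

    def forward(self, towers):               # (batch, seq_len) tower ids
        h, _ = self.rnn(self.emb(towers))
        return self.out(h)                   # (batch, seq_len, N_ROADS) logits

model = MapMatcher()
towers = torch.randint(0, N_TOWERS, (2, 7))  # two synthetic sequences
logits = model(towers)
roads = logits.argmax(dim=-1)                # most-likely road per step
print(roads.shape)                           # torch.Size([2, 7])
```

Once trained, inference is a single forward pass per sequence, which is the source of the speedup over HMM-style dynamic programming that the abstract reports.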
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.002083
0.00128
0.000853
0.000656
0.000545
0.000401
0.000308
0.000181
0.000091
0.000066
0.000056
0.000049
0.000048
General Inner Approximation Algorithm For Non-Convex Mathematical Programs Inner approximation algorithms have had two major roles in the mathematical programming literature. Their first role was in the construction of algorithms for the decomposition of large-scale mathe...
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that addresses the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, the sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained, on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits while performing multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as 68.9% of the overall weights in modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern that replaces conventional MAC, together with the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042