Dataset columns (per row): Query Text — string, 10–36.2k characters; Ranking 1–Ranking 13 — ranked document strings, roughly 12–36.2k characters each; score_0–score_13 — float64 relevance scores in descending order.
New directions in cryptography Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing.
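The public-key exchange this paper proposed is now known as Diffie–Hellman key agreement. A minimal sketch over a toy prime group (real deployments use groups of at least 2048 bits; all values here are illustrative):

```python
# Minimal Diffie-Hellman key exchange over a toy prime group.
# Real systems use much larger parameters (>= 2048-bit groups).
p = 23          # public prime modulus (toy size)
g = 5           # public generator
a_secret = 6    # Alice's private exponent
b_secret = 15   # Bob's private exponent

A = pow(g, a_secret, p)          # Alice publishes g^a mod p
B = pow(g, b_secret, p)          # Bob publishes g^b mod p

alice_key = pow(B, a_secret, p)  # Alice computes (g^b)^a mod p
bob_key = pow(A, b_secret, p)    # Bob computes (g^a)^b mod p
print(alice_key, bob_key)        # both sides derive the same shared secret: 2 2
```

An eavesdropper who sees only p, g, A, and B must solve a discrete-logarithm problem to recover the shared key, which is the hardness assumption the scheme rests on.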
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology that has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and to aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability in addition to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the domain-distribution mismatch on two levels, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power-level transitions under MSR-determined configurations while operating custom features.
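The conjunctive-trigger/disjunctive-observation insight above can be sketched as a simple set check (the trigger and observation names are illustrative, not the RemembERR schema):

```python
# Sketch of the key observation above: triggers are conjunctive (ALL must be
# applied), observations are disjunctive (ANY single deviation suffices).
# The trigger/observation names are illustrative, not the RemembERR schema.
def bug_detected(applied_triggers, required_triggers, seen, bug_observations):
    all_triggers_applied = required_triggers <= applied_triggers   # subset test
    any_observation_seen = bool(bug_observations & seen)           # intersection test
    return all_triggers_applied and any_observation_seen

required = {"power_level_transition", "custom_feature_active"}
possible = {"machine_check", "wrong_result", "hang"}

print(bug_detected({"power_level_transition", "custom_feature_active"},
                   required, {"hang"}, possible))   # True: all triggers, one deviation
print(bug_detected({"power_level_transition"},
                   required, {"hang"}, possible))   # False: a trigger is missing
```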
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models are inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest-neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aimed at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies: compared to alternative baseline approaches, it yields a significant increase in the number of completed tasks along with energy savings and cost reductions.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why URLLC fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: a car-following model was proposed based on deep reinforcement learning; it uses speed deviations as the reward function and considers a reaction delay of 1 s; the deep deterministic policy gradient algorithm was used to optimize the model; the model outperformed traditional and recent data-driven car-following models; and it demonstrated good generalization capability.
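The speed-deviation reward idea from the highlights above can be sketched as follows (the scale constant is an illustrative assumption, not the paper's exact reward formulation):

```python
# Sketch of a speed-deviation reward for car-following. The model described
# above evaluates actions against the human speed one reaction delay (1 s)
# later; the scale constant here is an illustrative assumption.
def speed_deviation_reward(ego_speed, human_speed, scale=10.0):
    # 0 when speeds match, increasingly negative as the deviation grows.
    return 0.0 - abs(ego_speed - human_speed) / scale

print(speed_deviation_reward(10.0, 10.0))   # 0.0  (perfect match)
print(speed_deviation_reward(12.0, 10.0))   # -0.2 (penalized deviation)
```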
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated against self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with a lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate the ineffectual computation caused by both zero-value weights and zero bits in non-zero weights. In addition, a split-and-accumulate (SAC) computing pattern replacing conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
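The zero-bit statistic that motivates weight kneading is straightforward to measure for a given weight encoding; a sketch assuming 8-bit quantized weights (the paper's exact number format may differ):

```python
# Measure the fraction of zero bits in a toy set of 8-bit quantized weights --
# the source of ineffectual computation that weight kneading targets. The 8-bit
# encoding is an assumption for illustration; the paper reports ~68.9% zero
# bits across modern CNN weights overall.
def zero_bit_fraction(weights_8bit):
    total_bits = 8 * len(weights_8bit)
    one_bits = sum(bin(w & 0xFF).count("1") for w in weights_8bit)  # popcount per weight
    return 1.0 - one_bits / total_bits

weights = [0b00000000, 0b11111111, 0b00001111, 0b00110011]  # toy weight values
print(zero_bit_fraction(weights))  # 0.5: 16 of the 32 bits are zero
```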
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
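The Kalman-filter-based estimation idea above can be sketched in one dimension; the random-walk trust model, the noise levels Q and R, and the behavior-to-measurement mapping below are illustrative assumptions, not the paper's identified model:

```python
# One-dimensional Kalman-filter sketch of trust estimation from sensed behavior.
# Process/measurement noise (Q, R), the random-walk trust dynamics, and the
# behavior-to-measurement mapping are illustrative assumptions.
def kalman_step(x, P, z, Q=0.01, R=0.25):
    x_pred, P_pred = x, P + Q          # predict: trust persists between interactions
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # correct with the noisy behavioral reading z
    P_new = (1.0 - K) * P_pred         # uncertainty shrinks after each update
    return x_new, P_new

x, P = 0.5, 1.0                        # initial trust estimate and variance
for z in [0.7, 0.75, 0.8, 0.78]:       # successive sensed-behavior readings
    x, P = kalman_step(x, P, z)
print(round(x, 2))                     # estimate pulled toward the readings
```

Each interaction both moves the estimate toward the latest reading and shrinks the estimator's variance, which is what lets trust be tracked over successive driver-system interactions.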
score_0–score_13 (descending): 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
A Comparative Analysis of Selection Schemes Used in Genetic Algorithms This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or "steady state") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques.
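Tournament selection, one of the schemes the paper compares, can be sketched as follows (population and fitness function are toy examples):

```python
import random

# Binary tournament selection: draw k individuals uniformly at random and keep
# the fittest; repeat to fill the mating pool. Raising k raises selection pressure.
def tournament_select(population, fitness, k=2, rng=random):
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)

random.seed(7)
population = list(range(10))     # individuals 0..9
fitness = lambda ind: ind        # toy fitness: larger is fitter
pool = [tournament_select(population, fitness) for _ in range(10)]
print(pool)                      # selection pressure favors fitter individuals
```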
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to make compensate for the insufficient data in real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to settle the mismatch domain distribution from two aspects, i.e., data level and knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples the style from content and recombines style and content from different domains to generate a hybrid domain as a transition between synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring and knowledge refining is designed, which achieves not only effective domain-invariant feature extraction, but also transfers the knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach in object detection of traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. 
As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumption in steel metallurgy comes from an iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process will be encountered if the models are inaccurate or data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is first presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data exhibit that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and yet enhanced while simultaneously using multiple wireless interfaces for D2D, MEC and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline, and accordingly, decide whether to drop from or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increase in number of completed tasks, energy savings, and cost reductions are resulted as compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across different collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash-stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Memetic Algorithms for Continuous Optimisation Based on Local Search Chains Memetic algorithms with continuous local search methods have arisen as effective tools to address the difficulty of obtaining reliable solutions of high precision for complex continuous optimisation problems. There exists a group of continuous local search algorithms that stand out as exceptional local search optimisers. However, on some occasions, they may become very expensive, because of the way they exploit local information to guide the search process. In this paper, they are called intensive continuous local search methods. Given the potential of this type of local optimisation methods, it is interesting to build prospective memetic algorithm models with them. This paper presents the concept of local search chain as a springboard to design memetic algorithm approaches that can effectively use intense continuous local search methods as local search operators. Local search chain concerns the idea that, at one stage, the local search operator may continue the operation of a previous invocation, starting from the final configuration (initial solution, strategy parameter values, internal variables, etc.) reached by this one. The proposed memetic algorithm favours the formation of local search chains during the memetic algorithm run with the aim of concentrating local tuning in search regions showing promise. In order to study the performance of the new memetic algorithm model, an instance is implemented with CMA-ES as an intense local search method. The benefits of the proposal in comparison to other kinds of memetic algorithms and evolutionary algorithms proposed in the literature to deal with continuous optimisation problems are experimentally shown. Concretely, the empirical study reveals a clear superiority when tackling high-dimensional problems.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during the multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
1
0.00186
0.001143
0.000761
0.000586
0.000487
0.000358
0.000275
0.000162
0.000081
0.000059
0.00005
0.000044
0.000043
Graph-Based Algorithms for Boolean Function Manipulation In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for the insufficiency of real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain-distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach in object detection for traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is the time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves the time-series prediction of carbon efficiency more accurately and efficiently. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient offloading of computation data and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop a task or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, which is a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and Containers to concomitantly meet URLLC constraints and provide cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members of each collaborative group consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs across different collaborative groups directly affect the performance of cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme that minimizes the cost of completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twin (DT)-based scheme to facilitate collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash-stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with the remarkable amount of ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting the essential bits during the multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits account for as much as 68.9% of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that eliminates ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. In addition, a split-and-accumulate (SAC) computing pattern that replaces the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Dynamic program slicing Program slices are useful in debugging, testing, maintenance, and understanding of programs. The conventional notion of a program slice, the static slice, is the set of all statements that might affect the value of a given variable occurrence. In this paper, we investigate the concept of the dynamic slice consisting of all statements that actually affect the value of a variable occurrence for a given program input. The sensitivity of dynamic slicing to particular program inputs makes it more useful in program debugging and testing than static slicing. Several approaches for computing dynamic slices are examined. The notion of a Dynamic Dependence Graph and its use in computing dynamic slices is discussed. The Dynamic Dependence Graph may be unbounded in length; therefore, we introduce the economical concept of a Reduced Dynamic Dependence Graph, which is proportional in size to the number of dynamic slices arising during the program execution.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to resolve the domain distribution mismatch from two aspects, i.e., data level and knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples the style from content and recombines style and content from different domains to generate a hybrid domain as a transition between synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring and knowledge refining is designed, which not only achieves effective domain-invariant feature extraction, but also transfers the knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach in object detection of traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline, and accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, being a major cause for its fundamentals to remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment that combines VMs and Containers strengths to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits while operating multiply-and-accumulate (MAC) in the processing element. Based on the fact that zero bits occupy as high as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can eliminate ineffectual computation caused by either zero-value weights or zero bits in non-zero weights, simultaneously. Besides, a split-and-accumulate (SAC) computing pattern in replacement of the conventional MAC, as well as the corresponding hardware accelerator design called Tetris, is proposed to support weight kneading at the hardware level. Experimental results show that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with the state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
The nature of statistical learning theory. First Page of the Article
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes on internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to make compensate for the insufficient data in real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to settle the mismatch domain distribution from two aspects, i.e., data level and knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples the style from content and recombines style and content from different domains to generate a hybrid domain as a transition between synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring and knowledge refining is designed, which achieves not only effective domain-invariant feature extraction, but also transfers the knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach in object detection of traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. 
As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumption in steel metallurgy comes from an iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process will be encountered if the models are inaccurate or data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is first presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data exhibit that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and yet enhanced while simultaneously using multiple wireless interfaces for D2D, MEC and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline, and accordingly, decide whether to drop from or add it to the user’s queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increase in number of completed tasks, energy savings, and cost reductions are resulted as compared to alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, being a major cause for its fundamentals to remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment that combines VMs and Containers strengths to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, as another major source of ineffectual computation, is often ignored. The reason lies on the difficulty of extracting essential bits during operating multiply-and-accumulate (MAC) in the processing element. Based on the fact that zero bits occupy as high as 68.9% fraction in the overall weights of modern deep convolutional neural network models, this paper firstly proposes a weight kneading technique that could eliminate ineffectual computation caused by either zero value weights or zero bits in non-zero weights, simultaneously. Besides, a split-and-accumulate (SAC) computing pattern in replacement of conventional MAC, as well as the corresponding hardware accelerator design called Tetris are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris could speed up inference up to 1.50x, and improve power efficiency up to 5.33x compared with the state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000574
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
A model of saliency-based visual attention for rapid scene analysis A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, a major reason why its fundamentals remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment combining the strengths of VMs and containers to concomitantly meet URLLC constraints and cloud-like Virtual Network Function (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. Highlights: • A car-following model was proposed based on deep reinforcement learning. • It uses speed deviations as the reward function and considers a reaction delay of 1 s. • The deep deterministic policy gradient algorithm was used to optimize the model. • The model outperformed traditional and recent data-driven car-following models. • The model demonstrated good generalization capability.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
DMM: fast map matching for cellular data Map matching for cellular data transforms a sequence of cell tower locations into a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs), which incur heavy computation overhead when considering high-order cell tower information. This paper presents a fast map matching framework for cellular data, named DMM, which adopts a recurrent neural network (RNN) to identify the most likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences by performing RNN inference, resulting in fast map matching. To turn DMM into a practical system, several challenges are addressed by developing a set of techniques, including a spatial-aware representation of input cell tower sequences, an encoder-decoder framework for the map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time application execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
1
0.001885
0.001159
0.000772
0.000593
0.000493
0.000363
0.000278
0.000164
0.000082
0.00006
0.00005
0.000045
0.000044
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
Towards Developing High Performance RISC-V Processors Using Agile Methodology While research has shown that the agile chip design methodology is promising to sustain the scaling of computing performance in a more efficient way, it is still of limited use in actual applications due to two major obstacles: 1) lack of a tool chain and development framework supporting agile chip design, especially for large-scale modern processors; 2) conventional verification methods are less agile and become a major bottleneck of the entire process. To tackle both issues, we propose MINJIE, an open-source platform supporting an agile processor development flow. MINJIE integrates a broad set of tools for logic design, functional verification, performance modelling, pre-silicon validation, and debugging for better development efficiency of state-of-the-art processor designs. We demonstrate the usage and effectiveness of MINJIE by building two generations of an open-source superscalar out-of-order RISC-V processor code-named XIANGSHAN using agile methodologies. We quantify the performance of XIANGSHAN using SPEC CPU2006 benchmarks and demonstrate that XIANGSHAN achieves industry-competitive performance.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computation data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains of the proposed solution that employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, being a major cause for its fundamentals to remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment that combines VMs and Containers strengths to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, as another major source of ineffectual computation, is often ignored. The reason lies on the difficulty of extracting essential bits during operating multiply-and-accumulate (MAC) in the processing element. Based on the fact that zero bits occupy as high as 68.9% fraction in the overall weights of modern deep convolutional neural network models, this paper firstly proposes a weight kneading technique that could eliminate ineffectual computation caused by either zero value weights or zero bits in non-zero weights, simultaneously. Besides, a split-and-accumulate (SAC) computing pattern in replacement of conventional MAC, as well as the corresponding hardware accelerator design called Tetris are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris could speed up inference up to 1.50x, and improve power efficiency up to 5.33x compared with the state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
1
0.001823
0.001121
0.000746
0.000575
0.000477
0.000351
0.000269
0.000158
0.00008
0.000058
0.000049
0.000043
0.000042
Signature Schemes and Anonymous Credentials from Bilinear Maps We propose a new and efficient signature scheme that is provably secure in the plain model. The security of our scheme is based on a discrete-logarithm-based assumption put forth by Lysyanskaya, Rivest, Sahai, and Wolf (LRSW) who also showed that it holds for generic groups and is independent of the decisional Diffie-Hellman assumption. We prove security of our scheme under the LRSW assumption for groups with bilinear maps. We then show how our scheme can be used to construct efficient anonymous credential systems as well as group signature and identity escrow schemes. To this end, we provide efficient protocols that allow one to prove in zero-knowledge the knowledge of a signature on a committed (or encrypted) message and to obtain a signature on a committed message.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient data in the real world. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of the synthetic dataset in the actual vision system of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method to address the mismatch in domain distribution from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module including source knowledge learning, knowledge transferring, and knowledge refining is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled actual images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach in object detection of traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key source of energy consumption in steel metallurgy is the iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving, and its prerequisite is time-series prediction of carbon efficiency. Existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process is required if the models become inaccurate or the data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, namely a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, a BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data show that, compared with other prediction models, the developed model achieves more accurate and efficient time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading, and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user’s queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions are achieved compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, being a major cause for its fundamentals to remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment that combines VMs and Containers strengths to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
DMM: fast map matching for cellular data Map matching for cellular data is to transform a sequence of cell tower locations to a trajectory on a road map. It is an essential processing step for many applications, such as traffic optimization and human mobility analysis. However, most current map matching approaches are based on Hidden Markov Models (HMMs) that have heavy computation overhead to consider high-order cell tower information. This paper presents a fast map matching framework for cellular data, named as DMM, which adopts a recurrent neural network (RNN) to identify the most-likely trajectory of roads given a sequence of cell towers. Once the RNN model is trained, it can process cell tower sequences as making RNN inference, resulting in fast map matching speed. To transform DMM into a practical system, several challenges are addressed by developing a set of techniques, including spatial-aware representation of input cell tower sequences, an encoder-decoder framework for map matching model with variable-length input and output, and a reinforcement learning based model for optimizing the matched outputs. Extensive experiments on a large-scale anonymized cellular dataset reveal that DMM provides high map matching accuracy (precision 80.43% and recall 85.42%) and reduces the average inference time of HMM-based approaches by 46.58×.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
1
0.002015
0.001238
0.000825
0.000634
0.000527
0.000388
0.000297
0.000175
0.000088
0.000064
0.000054
0.000048
0.000047
NP-complete scheduling problems We show that the problem of finding an optimal schedule for a set of jobs is NP-complete even in the following two restricted cases. (1) All jobs require one time unit. (2) All jobs require one or two time units, and there are only two processors, resolving (in the negative) a conjecture of R. L. Graham (Proc. SJCC, 1972, pp. 205-218). As a consequence, the general preemptive scheduling problem is also NP-complete. These results are tantamount to showing that the scheduling problems mentioned are intractable.
Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicles Digital Twin (DT) is an emerging technology and has been introduced into intelligent driving and transportation systems to digitize and synergize connected automated vehicles. However, existing studies focus on the design of the automated vehicle, whereas the digitization of the human driver, who plays an important role in driving, is largely ignored. Furthermore, previous driver-related tasks are limited to specific scenarios and have limited applicability. Thus, a novel concept of a driver digital twin (DDT) is proposed in this study to bridge the gap between existing automated driving systems and fully digitized ones and aid in the development of a complete driving human cyber-physical system (H-CPS). This concept is essential for constructing a harmonious human-centric intelligent driving system that considers the proactivity and sensitivity of the human driver. The primary characteristics of the DDT include multimodal state fusion, personalized modeling, and time variance. Compared with the original DT, the proposed DDT emphasizes internal personality and capability with respect to the external physiological-level state. This study systematically illustrates the DDT and outlines its key enabling aspects. The related technologies are comprehensively reviewed and discussed with a view to improving them by leveraging the DDT. In addition, the potential applications and unsettled challenges are considered. This study aims to provide fundamental theoretical support to researchers in determining the future scope of the DDT system.
A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthrough in wireless power transfer (WPT) technology has empowered wireless rechargeable sensor networks (WRSNs) by facilitating stable and continuous energy supply to sensors through mobile chargers (MCs). A plethora of studies have been carried out over the last decade in this regard. However, no comprehensive survey exists to compile the state-of-the-art literature and provide insight into future research directions. To fill this gap, we put forward a detailed survey on mobile charging techniques (MCTs) in WRSNs. In particular, we first describe the network model, various WPT techniques with empirical models, system design issues and performance metrics concerning the MCTs. Next, we introduce an exhaustive taxonomy of the MCTs based on various design attributes and then review the literature by categorizing it into periodic and on-demand charging techniques. In addition, we compare the state-of-the-art MCTs in terms of objectives, constraints, solution approaches, charging options, design issues, performance metrics, evaluation methods, and limitations. Finally, we highlight some potential directions for future research.
A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The latest 5G mobile networks have enabled many exciting Internet of Things (IoT) applications that employ unmanned aerial vehicles (UAVs/drones). The success of most UAV-based IoT applications is heavily dependent on artificial intelligence (AI) technologies, for instance, computer vision and path planning. These AI methods must process data and provide decisions while ensuring low latency and low energy consumption. However, the existing cloud-based AI paradigm finds it difficult to meet these strict UAV requirements. Edge AI, which runs AI on-device or on edge servers close to users, can be suitable for improving UAV-based IoT services. This article provides a comprehensive analysis of the impact of edge AI on key UAV technical aspects (i.e., autonomous navigation, formation control, power management, security and privacy, computer vision, and communication) and applications (i.e., delivery systems, civil infrastructure inspection, precision agriculture, search and rescue (SAR) operations, acting as aerial wireless base stations (BSs), and drone light shows). As guidance for researchers and practitioners, this article also explores UAV-based edge AI implementation challenges, lessons learned, and future research directions.
A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale synthetic traffic image datasets have been widely used to compensate for insufficient real-world data. However, the mismatch in domain distribution between synthetic datasets and real datasets hinders the application of synthetic datasets in the actual vision systems of intelligent vehicles. In this paper, we propose a novel synthetic-to-real domain adaptation method that resolves the domain distribution mismatch from two aspects, i.e., the data level and the knowledge level. On the data level, a Style-Content Discriminated Data Recombination (SCD-DR) module is proposed, which decouples style from content and recombines style and content from different domains to generate a hybrid domain as a transition between the synthetic and real domains. On the knowledge level, a novel Iterative Cross-Domain Knowledge Transferring (ICD-KT) module, including source knowledge learning, knowledge transferring, and knowledge refining, is designed, which not only achieves effective domain-invariant feature extraction but also transfers knowledge from labeled synthetic images to unlabeled real images. Comprehensive experiments on public virtual and real dataset pairs demonstrate the effectiveness of our proposed synthetic-to-real domain adaptation approach for object detection in traffic scenes.
RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are constantly increasing in complexity, but to remain competitive, their design and testing cycles must be kept as short as possible. This trend inevitably leads to design errors that eventually make their way into commercial products. Major microprocessor vendors such as Intel and AMD regularly publish and update errata documents describing these errata after their microprocessors are launched. The abundance of errata suggests the presence of significant gaps in the design testing of modern microprocessors. We argue that while a specific erratum provides information about only a single issue, the aggregated information from the body of existing errata can shed light on existing design testing gaps. Unfortunately, errata documents are not systematically structured. We formalize that each erratum describes, in human language, a set of triggers that, when applied in specific contexts, cause certain observations that pertain to a particular bug. We present RemembERR, the first large-scale database of microprocessor errata collected among all Intel Core and AMD microprocessors since 2008, comprising 2,563 individual errata. Each RemembERR entry is annotated with triggers, contexts, and observations, extracted from the original erratum. To generalize these properties, we classify them on multiple levels of abstraction that describe the underlying causes and effects. We then leverage RemembERR to study gaps in design testing by making the key observation that triggers are conjunctive, while observations are disjunctive: to detect a bug, it is necessary to apply all triggers and sufficient to observe only a single deviation. Based on this insight, one can rely on partial information about triggers across the entire corpus to draw consistent conclusions about the best design testing and validation strategies to cover the existing gaps. As a concrete example, our study shows that we need testing tools that exert power level transitions under MSR-determined configurations while operating custom features.
Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Efficiency in Iron Ore Sintering Process A key energy consumption in steel metallurgy comes from an iron ore sintering process. Enhancing carbon utilization in this process is important for green manufacturing and energy saving and its prerequisite is a time-series prediction of carbon efficiency. The existing carbon efficiency models usually have a complex structure, leading to a time-consuming training process. In addition, a complete retraining process will be encountered if the models are inaccurate or data change. Analyzing the complex characteristics of the sintering process, we develop an original prediction framework, that is, a weighted kernel-based fuzzy C-means (WKFCM)-based broad learning model (BLM), to achieve fast and effective carbon efficiency modeling. First, sintering parameters affecting carbon efficiency are determined, following the sintering process mechanism. Next, WKFCM clustering is first presented for the identification of multiple operating conditions to better reflect the system dynamics of this process. Then, the BLM is built under each operating condition. Finally, a nearest neighbor criterion is used to determine which BLM is invoked for the time-series prediction of carbon efficiency. Experimental results using actual run data exhibit that, compared with other prediction models, the developed model can more accurately and efficiently achieve the time-series prediction of carbon efficiency. Furthermore, the developed model can also be used for the efficient and effective modeling of other industrial processes due to its flexible structure.
SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heterogeneous MEC Network Integrating device-to-device (D2D) cooperation with mobile edge computing (MEC) for computation offloading has proven to be an effective method for extending the system capabilities of low-end devices to run complex applications. This can be realized through efficient computing data offloading and further enhanced by simultaneously using multiple wireless interfaces for D2D, MEC, and cloud offloading. In this work, we propose user-centric real-time computation task offloading and resource allocation strategies aiming at minimizing energy consumption and monetary cost while maximizing the number of completed tasks. We develop dynamic partial offloading solutions using the Lyapunov drift-plus-penalty optimization approach. Moreover, we propose a task admission solution based on support vector machines (SVM) to assess the potential of a task to be completed within its deadline and, accordingly, decide whether to drop it from or add it to the user's queue for processing. Results demonstrate high performance gains for the proposed solution, which employs SVM-based task admission and Lyapunov-based computation offloading strategies. Significant increases in the number of completed tasks, energy savings, and cost reductions result compared with alternative baseline approaches.
An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture is unlikely to cope with Ultra-Reliable Low-Latency Communications (URLLC) constraints, being a major cause for its fundamentals to remain elusive. Multi-access Edge Computing (MEC) and Network Function Virtualization (NFV) emerge as complementary solutions, offering fine-grained on-demand distributed resources closer to the User Equipment (UE). This work proposes a multipurpose analytical framework that evaluates a hybrid virtual MEC environment that combines VMs and Containers strengths to concomitantly meet URLLC constraints and cloud-like Virtual Network Functions (VNF) elasticity.
Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving Collaborative driving can significantly reduce the computation offloading from autonomous vehicles (AVs) to edge computing devices (ECDs) and the computation cost of each AV. However, the frequent information exchanges between AVs for determining the members in each collaborative group will consume a lot of time and resources. In addition, since AVs have different computing capabilities and costs, the collaboration types of the AVs in each group and the distribution of the AVs in different collaborative groups directly affect the performance of the cooperative driving. Therefore, how to develop an efficient collaborative autonomous driving scheme to minimize the cost for completing the driving process becomes a new challenge. To this end, we regard collaboration as a service and propose a digital twins (DT)-based scheme to facilitate the collaborative and distributed autonomous driving. Specifically, we first design the DT for each AV and develop a DT-enabled architecture to help AVs make the collaborative driving decisions in the virtual networks. With this architecture, an auction game-based collaborative driving mechanism (AG-CDM) is then designed to decide the head DT and the tail DT of each group. After that, by considering the computation cost and the transmission cost of each group, a coalition game-based distributed driving mechanism (CG-DDM) is developed to decide the optimal group distribution for minimizing the driving cost of each DT. Simulation results show that the proposed scheme can converge to a Nash stable collaborative and distributed structure and can minimize the autonomous driving cost of each AV.
Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following model was proposed based on deep reinforcement learning.•It uses speed deviations as reward function and considers a reaction delay of 1 s.•Deep deterministic policy gradient algorithm was used to optimize the model.•The model outperformed traditional and recent data-driven car-following models.•The model demonstrated good capability of generalization.
Relay-Assisted Cooperative Federated Learning Federated learning (FL) has recently emerged as a promising technology to enable artificial intelligence (AI) at the network edge, where distributed mobile devices collaboratively train a shared AI model under the coordination of an edge server. To significantly improve the communication efficiency of FL, over-the-air computation allows a large number of mobile devices to concurrently upload their local models by exploiting the superposition property of wireless multi-access channels. Due to wireless channel fading, the model aggregation error at the edge server is dominated by the weakest channel among all devices, causing severe straggler issues. In this paper, we propose a relay-assisted cooperative FL scheme to effectively address the straggler issue. In particular, we deploy multiple half-duplex relays to cooperatively assist the devices in uploading the local model updates to the edge server. The nature of the over-the-air computation poses system objectives and constraints that are distinct from those in traditional relay communication systems. Moreover, the strong coupling between the design variables renders the optimization of such a system challenging. To tackle the issue, we propose an alternating-optimization-based algorithm to optimize the transceiver and relay operation with low complexity. Then, we analyze the model aggregation error in a single-relay case and show that our relay-assisted scheme achieves a smaller error than the one without relays provided that the relay transmit power and the relay channel gains are sufficiently large. The analysis provides critical insights on relay deployment in the implementation of cooperative FL. Extensive numerical results show that our design achieves faster convergence compared with state-of-the-art schemes.
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, another major source of ineffectual computation, are often ignored. The reason lies in the difficulty of extracting essential bits during multiply-and-accumulate (MAC) operations in the processing element. Based on the fact that zero bits occupy as much as a 68.9% fraction of the overall weights of modern deep convolutional neural network models, this paper first proposes a weight kneading technique that can simultaneously eliminate ineffectual computation caused by either zero-value weights or zero bits in non-zero weights. Besides, a split-and-accumulate (SAC) computing pattern in replacement of conventional MAC, as well as the corresponding hardware accelerator design called Tetris, are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris can speed up inference by up to 1.50x and improve power efficiency by up to 5.33x compared with the state-of-the-art baselines.
Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers' trust in the automated driving system during real-time applications execution. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n=80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels to mitigate both undertrust and overtrust.
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042
"The Complexity of Flowshop and Jobshop Scheduling NP-complete problems form an extensive equivalenc(...TRUNCATED)
"Review and Perspectives on Driver Digital Twin and Its Enabling Technologies for Intelligent Vehicl(...TRUNCATED)
"A Survey on Mobile Charging Techniques in Wireless Rechargeable Sensor Networks The recent breakthr(...TRUNCATED)
"A Survey on the Convergence of Edge Computing and AI for UAVs: Opportunities and Challenges The lat(...TRUNCATED)
"A Parallel Teacher for Synthetic-to-Real Domain Adaptation of Traffic Object Detection Large-scale (...TRUNCATED)
"RemembERR: Leveraging Microprocessor Errata for Design Testing and Validation Microprocessors are c(...TRUNCATED)
"Weighted Kernel Fuzzy C-Means-Based Broad Learning Model for Time-Series Prediction of Carbon Effic(...TRUNCATED)
"SVM-Based Task Admission Control and Computation Offloading Using Lyapunov Optimization in Heteroge(...TRUNCATED)
"An analytical framework for URLLC in hybrid MEC environments The conventional mobile architecture i(...TRUNCATED)
"Collaboration as a Service: Digital-Twin-Enabled Collaborative and Distributed Autonomous Driving C(...TRUNCATED)
"Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning. •A car-following mode(...TRUNCATED)
"Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We(...TRUNCATED)
"Tetris: re-architecting convolutional neural network computation for machine learning accelerators (...TRUNCATED)
"Real-Time Estimation of Drivers' Trust in Automated Driving Systems Trust miscalibration issues, re(...TRUNCATED)
score_0–score_13: 1, 0.001823, 0.001121, 0.000746, 0.000574, 0.000477, 0.000351, 0.000269, 0.000158, 0.00008, 0.000058, 0.000049, 0.000043, 0.000042