Dataset columns (string columns report min/max string length; score columns report min/max value):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| Query Text | string | 10 | 40.4k |
| Ranking 1 | string | 12 | 40.4k |
| Ranking 2 | string | 12 | 36.2k |
| Ranking 3 | string | 10 | 36.2k |
| Ranking 4 | string | 13 | 40.4k |
| Ranking 5 | string | 12 | 36.2k |
| Ranking 6 | string | 13 | 36.2k |
| Ranking 7 | string | 10 | 40.4k |
| Ranking 8 | string | 12 | 36.2k |
| Ranking 9 | string | 12 | 36.2k |
| Ranking 10 | string | 12 | 36.2k |
| Ranking 11 | string | 20 | 6.21k |
| Ranking 12 | string | 14 | 8.24k |
| Ranking 13 | string | 28 | 4.03k |
| score_0 | float64 | 1 | 1.25 |
| score_1 | float64 | 0 | 0.25 |
| score_2 | float64 | 0 | 0.25 |
| score_3 | float64 | 0 | 0.25 |
| score_4 | float64 | 0 | 0.25 |
| score_5 | float64 | 0 | 0.25 |
| score_6 | float64 | 0 | 0.25 |
| score_7 | float64 | 0 | 0.24 |
| score_8 | float64 | 0 | 0.2 |
| score_9 | float64 | 0 | 0.03 |
| score_10 | float64 | 0 | 0 |
| score_11 | float64 | 0 | 0 |
| score_12 | float64 | 0 | 0 |
| score_13 | float64 | 0 | 0 |

Each row below lists the Query Text, the thirteen ranked abstracts (Ranking 1 through Ranking 13, in order), and the row's fourteen scores.
Query Text: Real-time correlation for locating systems utilizing heterogeneous computing architectures The usage of locating systems in sports (e.g. soccer) elevates match and training analysis to a new level. By tracking players and balls during matches or training, the performance of players can be analyzed, the training can be adapted, and new strategies can be developed. The radio-based RedFIR system equips players and the ball with miniaturized transmitters, while antennas distributed around the playing field receive the transmitted radio signals. A cluster computer processes these signals to determine the exact positions based on the signals' Time Of Arrival (TOA) at the back end. While such a system works well, it is neither scalable nor inexpensive due to the required computing cluster. The relatively high power consumption of the GPU-based cluster is also suboptimal. Moreover, high-speed interconnects between the antennas and the cluster computers introduce additional costs and increase the installation effort. However, a significant portion of the computing performance is required not for the synthesis of the received data, but for the calculation of the individual TOA values of every receiver line. Therefore, in this paper we propose a smart sensor approach: by integrating some intelligence into the antenna (smart antenna), each antenna can correlate the received signal independently of the remaining system, and only TOA values are sent to the back end. While the idea is quite simple, the question of a well-suited computer architecture to fulfill this task inside the smart antenna is more complex. We therefore evaluate embedded architectures such as FPGAs and ARM cores, as well as a many-core CPU (Epiphany), for this approach. With these, we are able to achieve 50,000 correlations per second in each smart antenna. As a result, the back end becomes lightweight, the data reduction permits cheaper interconnects, and the system becomes more scalable, since most of the processing power is already included in the antenna.
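The per-antenna step this abstract describes, reducing raw samples to a TOA value by correlating against the known transmitted signal, can be sketched in a few lines of numpy. This is illustrative only; the paper's fixed-point FPGA/Epiphany kernels are not shown in the abstract, and the chirp reference below is an assumption for the demo:

```python
import numpy as np

def toa_by_correlation(reference, received, sample_rate):
    """Estimate time of arrival by cross-correlating the received
    samples against the known transmitted reference burst."""
    # Full cross-correlation; the lag with the largest magnitude marks
    # where the reference best aligns with the received stream.
    corr = np.correlate(received, reference, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(reference) - 1)
    return lag / sample_rate  # seconds relative to capture start

# Toy usage: a chirp reference buried in noise, delayed by 100 samples.
fs = 1e6
t = np.arange(256) / fs
ref = np.sin(2 * np.pi * (50e3 + 2e8 * t) * t)
rx = np.concatenate([np.zeros(100), ref, np.zeros(100)])
rx += 0.1 * np.random.randn(rx.size)
print(toa_by_correlation(ref, rx, fs))  # ~1e-4 s
```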
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use *dominance frontiers*, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
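Given immediate dominators, dominance frontiers take only a few lines to compute. The sketch below uses the later Cooper-Harvey-Kennedy formulation, which yields the same DF sets as the paper's original bottom-up traversal; `preds` (predecessor lists) and `idom` are assumed inputs:

```python
from collections import defaultdict

def dominance_frontiers(preds, idom):
    """Dominance frontiers from predecessor lists and immediate
    dominators (idom of the entry block is the entry block itself)."""
    df = defaultdict(set)
    for b, ps in preds.items():
        if len(ps) < 2:
            continue  # only join points contribute frontier entries
        for p in ps:
            runner = p
            # Walk up the dominator tree; every node strictly below
            # idom(b) on this path has b in its dominance frontier.
            while runner != idom[b]:
                df[runner].add(b)
                runner = idom[runner]
    return dict(df)

# Diamond CFG: entry -> a, entry -> b; a -> join <- b.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))  # {'a': {'join'}, 'b': {'join'}}
```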
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
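The single operation Chord exposes, mapping a key onto a node, can be imitated in a few lines with a sorted identifier ring. This is a centralized toy: real Chord resolves the successor in O(log N) messages via finger tables rather than a global sorted list, and the small identifier space below is purely for illustration:

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # identifier space size (2^m); tiny here for illustration

def chord_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

class Ring:
    def __init__(self, nodes):
        # Sorted (identifier, node-name) pairs stand in for the ring.
        self.ring = sorted((chord_id(n), n) for n in nodes)

    def successor(self, key: str) -> str:
        """First node whose identifier >= hash(key), wrapping around."""
        ids = [i for i, _ in self.ring]
        idx = bisect_left(ids, chord_id(key)) % len(self.ring)
        return self.ring[idx][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.successor("some/data/item"))  # node responsible for this key
```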
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
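Among the applications this review lists, the lasso shows how compact the ADMM iteration is: alternate a ridge-type x-update, a soft-thresholding z-update, and a dual update. A minimal numpy sketch under simplifying assumptions (fixed penalty rho, no over-relaxation or stopping criterion; not the review's reference code):

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for 0.5*||Ax-b||^2 + lam*||x||_1 via the splitting x = z."""
    n = A.shape[1]
    # x-update solves a ridge system; factor it once up front.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z  # scaled dual ascent on the consensus constraint
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # sparse estimate
```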
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Query Text: A 90 nm CMOS 16 Gb/s Transceiver for Optical Interconnects Interconnect architectures which leverage high-bandwidth optical channels offer a promising solution to address the increasing chip-to-chip I/O bandwidth demands. This paper describes a dense, high-speed, and low-power CMOS optical interconnect transceiver architecture. Vertical-cavity surface-emitting laser (VCSEL) data rate is extended for a given average current and corresponding reliability le...
Design of Automotive VCSEL Transmitter with On-Chip Feedforward Optical Power Control We propose a novel 50 Mb/s optical transmitter fabricated in a 0.6-μm BiCMOS technology for automotive applications. The proposed VCSEL driver chip was designed to operate with a single supply voltage ranging from 3.0 V to 5.25 V. A fully integrated feedforward current control circuit is presented to stabilize the optical output power without any external components. The experimental results show that the optical output power can be kept stable within a 1.1 dB range, with an extinction ratio greater than 14 dB, over the automotive environmental temperature range of −40 °C to 105 °C.
Design of a 56 Gbit/s 4-level pulse-amplitude-modulation inductor-less vertical-cavity surface-emitting laser driver integrated circuit in 130 nm BiCMOS technology This paper presents the design and analysis of a 4-level pulse-amplitude-modulation (4-PAM) 56 Gbit/s vertical-cavity surface-emitting laser (VCSEL) driver integrated circuit (IC) for short-range, high-speed and low-power optical interconnections. An amplitude-modulated signal is necessary to overcome the speed bottleneck of current VCSELs and to decrease the power consumption per bit. A prototype IC is developed in a standard 130 nm BiCMOS technology. The circuit converts two single-ended input signals to a 4-level signal fed to the laser. The driver also provides the DC current and the voltage necessary to bias the VCSEL. The power dissipation of the driver is only 115 mW including both the VCSEL and the 50 Ω input single-to-differential-ended converters. To the author's knowledge this is the first 56 Gbit/s 4-PAM laser driver implemented in silicon, with a power dissipation per data-rate (DR) of 2.05 mW/Gbit/s including the VCSEL, making it the most power-efficient 56 Gbit/s common-cathode laser driver. The active area occupies 0.056 mm². The small-signal bandwidths are 49 GHz for the high- and 43 GHz for the low-amplitude amplification path, when the VCSEL is not connected. The bit error rate was tested electrically, showing an error-free connection at 28 GBaud.
Survey of Photonic and Plasmonic Interconnect Technologies for Intra-Datacenter and High-Performance Computing Communications. Large scale data centers (DC) and high performance computing (HPC) systems require more and more computing power at higher energy efficiency. They are already consuming megawatts of power, and a linear extrapolation of trends reveals that they may eventually lead to unrealistic power consumption scenarios in order to satisfy future requirements (e.g., Exascale computing). Conventional complementar...
A Differential Push-Pull Voltage Mode VCSEL Driver in 65-nm CMOS Improving power-conversion efficiency (PCE) of VCSEL drivers is paramount to improve the overall energy efficiency of the entire optical link for high-performance computing and datacenters. VCSEL diodes are normally driven single-ended with pseudo-differential current-mode drivers to maintain signal integrity. However, such conventional drivers consume significant power and are often unable to compensate for supply switching noise due to package parasitics at high data-rates. We propose a differential push-pull voltage-mode VCSEL driver to mitigate bondwire parasitics, reduce power consumption, and leverage CMOS process scaling to its maximum advantage. A proof-of-concept prototype in 65-nm CMOS process achieves the highest ever-reported PCE of 18.7 % for VCSEL drivers when normalized to VCSEL slope efficiency. It uses an asymmetric 3-tap rise and fall-based pre-emphasis to achieve a total energy/bit of 1.52 pJ/b at 16 Gb/s with an average optical power output of 1.34 dBm, OMA of 2.1 dBm, and extinction ratio of 5.92 dB.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (i) highly reliable communication whenever and wherever needed; (ii) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding up node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
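The "independent preconditions" heuristic this abstract describes can be sketched directly: propagate proposition costs through actions to a fixpoint, then sum the goal propositions' costs (the additive heuristic). A toy STRIPS-like sketch; the `Action` encoding and `h_add` name are illustrative, not HSP's input language:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    pre: frozenset
    add: frozenset
    cost: int = 1

def h_add(state, goal, actions):
    """Additive heuristic: propagate proposition costs to a fixpoint,
    assuming action preconditions are achieved independently."""
    cost = {p: 0.0 for p in state}
    changed = True
    while changed:
        changed = False
        for a in actions:
            if all(p in cost for p in a.pre):
                c = a.cost + sum(cost[p] for p in a.pre)
                for q in a.add:
                    if c < cost.get(q, math.inf):
                        cost[q] = c
                        changed = True
    return sum(cost.get(g, math.inf) for g in goal)

acts = [Action(frozenset({"at-a"}), frozenset({"at-b"})),
        Action(frozenset({"at-b"}), frozenset({"at-c"})),
        Action(frozenset({"at-b"}), frozenset({"has-key"}))]
print(h_add({"at-a"}, {"at-c", "has-key"}, acts))  # 2 + 2 = 4
```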
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware “neurons” that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
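Concretely, with the exponential activation the PNN reduces to a per-class Parzen density estimate followed by an argmax. A numpy sketch, where σ is a hand-tuned smoothing width and the sequential loop ignores the parallel-hardware realization the paper emphasizes:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network: one pattern unit per exemplar,
    class score = mean Gaussian kernel response (Parzen estimate)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))         # pattern-layer outputs
    classes = np.unique(y_train)
    scores = np.array([k[y_train == c].mean() for c in classes])
    return classes[np.argmax(scores)], scores / scores.sum()

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
label, posterior = pnn_classify(X, y, np.array([0.2, 0.1]))
print(label, np.round(posterior, 3))  # class 0 with high posterior
```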
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today's smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found that 20 applications potentially misused users' private information; a similar fraction of the tested applications did so in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
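For the linear case the receding-horizon loop is easy to sketch: at each state, solve a finite-horizon LQ problem by backward Riccati recursion, apply only the first input, and re-solve at the next state. The horizon below is a fixed N, an assumption for the demo, not the state-dependent horizon the paper develops:

```python
import numpy as np

def rh_control(A, B, Q, R, x, N=10):
    """One receding-horizon step: backward Riccati recursion over
    horizon N, returning only the first feedback input u0 = -K0 x."""
    P = Q.copy()
    for _ in range(N):  # backward pass; last K computed is K0
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
x = np.array([1.0, 0.0])
for t in range(50):                      # closed loop: re-solve each step
    u = rh_control(A, B, Q, R, x)
    x = A @ x + B @ u
print(np.round(x, 3))                    # state driven toward the origin
```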
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resource limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present an algorithm with reduced, optimal running time O(D + Δ) that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10 W in 32 nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level V_OUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure V_OUT > V_OUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
Scores (score_0–score_13): 1.1, 0.1, 0.1, 0.1, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Query Text: Direct bandpass sampling of multiple distinct RF signals A goal in the software radio design philosophy is to place the analog-to-digital converter as near the antenna as possible. This objective has been demonstrated for the case of a single input signal. Bandpass sampling has been applied to downconvert, or intentionally alias, the information bandwidth of a radio frequency (RF) signal to a desired intermediate frequency. The design of the software radio becomes more interesting when two or more distinct signals are received. The traditional approach for multiple signals would be to bandpass sample a continuous span of spectrum containing all the desired signals. The disadvantage with this approach is that the sampling rate and associated discrete processing rate are based on the span of spectrum as opposed to the information bandwidths of the signals of interest. Proposed here is a technique to determine the absolute minimum sampling frequency for direct digitization of multiple, nonadjacent, frequency bands. The entire process is based on the calculation of a single parameter: the sampling frequency. The result is a simple, yet elegant, front-end design for the reception and bandpass sampling of multiple RF signals. Experimental results using RF transmissions from the U.S. Global Positioning System-Standard Position Service (GPS-SPS) and the Russian Global Navigation Satellite System (GLONASS) are used to illustrate and verify the theory.
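For a single band [f_L, f_H], valid bandpass sampling rates satisfy 2f_H/n ≤ fs ≤ 2f_L/(n−1) for some integer n; with several nonadjacent bands, a rate is valid only if every band folds intact into a Nyquist zone and the folded images stay disjoint. A brute-force numeric sketch of that feasibility test (a grid search for illustration, not the paper's direct single-parameter calculation; the band edges below are rough stand-ins for GPS-SPS and GLONASS L1):

```python
def folded(f_lo, f_hi, fs):
    """Baseband image of a band after sampling at fs, or None if the
    band straddles a multiple of fs/2 (it would alias onto itself)."""
    n_lo, n_hi = int(2 * f_lo // fs), int(2 * f_hi // fs)
    if n_lo != n_hi:
        return None
    lo, hi = f_lo - n_lo * fs / 2, f_hi - n_lo * fs / 2
    return (lo, hi) if n_lo % 2 == 0 else (fs / 2 - hi, fs / 2 - lo)

def min_fs(bands, step=1e3):
    """Smallest fs (on a grid) folding all bands without overlap."""
    total_bw = sum(hi - lo for lo, hi in bands)
    fs = 2 * total_bw  # information-rate lower bound; search upward
    while True:
        images = [folded(lo, hi, fs) for lo, hi in bands]
        if all(images):
            images.sort()
            if all(a[1] <= b[0] for a, b in zip(images, images[1:])):
                return fs
        fs += step

# Approximate bands: GPS L1 C/A (~2 MHz) and GLONASS L1 (~8 MHz).
bands = [(1574.42e6, 1576.42e6), (1598.0e6, 1606.0e6)]
print(min_fs(bands) / 1e6, "MHz")
```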
New Architecture for a Wireless Smart Sensor Based on a Software-Defined Radio. Today, wireless sensor technology is based on monolithic transceivers that optimize cost but have a rigid hardware architecture. In this paper, a new architecture for wireless sensors is presented. It is based on a software-defined radio concept and shows impressive adaptability to external conditions. The proposed architecture, which is called the wireless ultrasmart sensor (WUSS), enables the us...
RF Front-End Concept and Implementation for Direct Sampling of Multiband Signals. The placement of the analog-to-digital converter as near the antenna as possible is a key issue in the software-defined radio receiver design. Direct sampling of the incoming filtered signal is a compact solution enabling channel simultaneity. In this brief, in the context of evenly spaced equal-bandwidth multiband systems, sufficient conditions for the channel allocation assuring that the minimum...
LC-Based Bandpass Continuous-Time Sigma-Delta Modulators With Widely Tunable Notch Frequency This paper analyses the use of bandpass continuous-time ΣΔ modulators with widely programmable notch frequency for the efficient digitization of radio-frequency signals in the next generation of software-defined-radio mobile systems. The modulator architectures under study are based on a fourth-order loop filter - implemented with two LC-based resonators - and a finite-impulse-response feedback loop in order to increase their flexibility and degrees of freedom. Several topologies are studied, considering three different cases for the embedded digital-to-analog converter, namely: return-to-zero, non-return-to-zero and raised-cosine waveform. In all cases, a notch-aware synthesis methodology is presented, which takes into account the dependency of the loop-filter coefficients on the notch frequency and compensates for the dynamic range degradation due to the variation of the notch. The synthesized modulators are compared in terms of their sensitivity to main circuit error mechanisms and the estimated power consumption over a notch-frequency tuning range of 0.1fs to 0.4fs. Time-domain behavioral and macromodel electrical simulations validate this approach, demonstrating the feasibility of the presented methodology and architectures for the efficient and robust digitization of radio-frequency signals with a scalable resolution and programmable signal bandwidth.
The Design Method and Performance Analysis of RF Subsampling Frontend for SDR/CR Receivers RF subsampling can be used by radio receivers to directly downconvert and digitize RF signals. The goal of software-defined radio (SDR) design is to place the analog-to-digital converter (ADC) as near the antenna as possible. Based on this, an RF subsampling frontend (FE) for SDR is designed and verified on a hardware platform. The effects of timing jitter, ADC resolution, and folding noise, the dominant SNR degradation sources in the digital FE, were considered. We present an efficient method of SNR measurement and an analysis of its performance. The experimental results indicate that the three degradation sources are sufficient to estimate the performance of the RF subsampling FE, and this conclusion matches the theoretical analysis results.
All-digital TX frequency synthesizer and discrete-time receiver for Bluetooth radio in 130-nm CMOS We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the ph...
A Second-Order Antialiasing Prefilter for a Software-Defined Radio Receiver A new architecture is presented for a sinc²(f) filter intended to sample channels of varying bandwidth when surrounded by blockers and adjacent bands. The sample rate is programmable from 5 to 40 MHz, and aliases are suppressed by 45 dB or more. The noise and linearity performance of the filter is analyzed, and the effects of various imperfections such as transconductor finite output impedance, interchannel gain mismatch, and residual offsets in the channels are studied. Furthermore, it is proved that the filter is robust to clock jitter. The 0.13-μm CMOS circuit consumes 6 mA from a 1.2-V supply.
Second-order intermodulation mechanisms in CMOS downconverters An in-depth analysis of the mechanisms responsible for second-order intermodulation distortion in CMOS active downconverters is proposed in this paper. The achievable second-order input intercept point (IIP2) has a fundamental limit due to nonlinearity and mismatches in the switching stage and improves with technology scaling. Second-order intermodulation products generated by the input transcondu...
Track-and-Zoom Neural Analog-to-Digital Converter With Blind Stimulation Artifact Rejection Closed-loop neuromodulation for the treatment of neurological disorders requires monitoring of the brain activity uninterruptedly even during neurostimulation. This article presents a bidirectional 32-channel CMOS neural interface that can record neural activity during stimulation. Each channel consists of a dc-coupled Δ²Σ-modulated analog-to-digital converter (neural-ADC), which records slow potentials (< 0.1 Hz) while accommodating rail-to-rail dc offset using a spectrum-shaping front-end. This front-end equalizes the neural signal spectrum before signal quantization, which reduces the energy consumption and silicon area. Upon detection of a large artifact by an in-channel event-triggered digital block, the modulator feedback DAC tracks the artifact with step sizes incrementing in a radix-2 exponential form, preventing the neural-ADC from saturation. Upon tracking the artifact, the multi-bit DAC step size is reduced to zoom into the input neural signal at the highest recording resolution. The modulator's multi-bit DAC is reused in a time-shared fashion as a current-mode stimulator with no area overhead. The Δ²Σ-ADC consumes 1.7 μW from 0.6-V/1.2-V digital/analog supplies and time-shares the modulator's feedback DAC as the multi-bit current-mode stimulator operating at 3.3 V. The ADC occupies a silicon area of 0.023 mm² in the 130-nm CMOS and achieves a signal-to-noise-and-distortion ratio (SNDR) of 70 dB over the 500-Hz bandwidth and an equivalent noise efficiency factor (NEF) of 2.86 without a stand-alone front-end amplifier. The 32-channel bidirectionally interfacing prototype is validated in the in vivo whole brain of a rodent.
Computing size-independent matrix problems on systolic array processors A methodology to transform dense to band matrices is presented in this paper. This transformation is accomplished by triangular block partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A tight lower bound on the cover time for random walks on graphs We prove that the expected time for a random walk to cover all n vertices of a graph is at least (1 + o(1)) n ln n. © 1995 Wiley Periodicals, Inc.
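A quick experiment makes the (1 + o(1)) n ln n bound tangible: on the complete graph, where the walk behaves like coupon collecting and essentially meets the bound, empirical cover times hug n ln n. A hedged simulation sketch:

```python
import math, random

def cover_time_complete_graph(n, trials=100):
    """Mean steps for a random walk on K_n to visit all n vertices;
    coupon-collector behavior, i.e., about n ln n."""
    total = 0
    for _ in range(trials):
        seen, v, steps = {0}, 0, 0
        while len(seen) < n:
            u = random.randrange(n - 1)
            v = u if u < v else u + 1  # uniform over the n-1 neighbors
            seen.add(v)
            steps += 1
        total += steps
    return total / trials

for n in (16, 64, 256):
    print(n, round(cover_time_complete_graph(n)), round(n * math.log(n)))
```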
Decision making for cognitive radio equipment: analysis of the first 10 years of exploration. This article draws a general retrospective view on the first 10 years of cognitive radio (CR). More specifically, we explore decision making and learning for CR from an equipment perspective. Thus, this article depicts the main decision-making problems addressed by the community as general dynamic configuration adaptation (DCA) problems and discusses the solutions proposed in the literature to tackle them. Within this framework, dynamic spectrum management is briefly introduced as a specific instantiation of DCA problems. We identified, in our analysis, three dimensions of constraints: the environment's, the equipment's and the user's related constraints. Moreover, we define and use the notion of a priori knowledge to show that the challenges tackled by the radio community during the first 10 years of CR to solve decision-making problems often share the same design space, but differ by the a priori knowledge they assume available. Consequently, we suggest "a priori knowledge" as a classification criterion to discriminate among the main techniques proposed in the literature to solve configuration adaptation decision-making problems. We finally discuss the impact of sensing errors on the decision-making process as a prospective analysis.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1.009355, 0.009167, 0.008333, 0.008333, 0.004167, 0.001367, 0.000082, 0.000013, 0.000001, 0, 0, 0, 0, 0
Query Text: A comprehensive review on type 2 fuzzy logic applications: Past, present and future In this paper, a concise overview of the work that has been done by various researchers in the area of type-2 fuzzy logic is analyzed and discussed. Type-2 fuzzy systems have been widely applied in the fields of intelligent control, pattern recognition and classification, among others. The overview mainly focuses on past, present and future trends of type-2 fuzzy logic applications. Of utmost importance is the last part, outlining possible areas of applied research in type-2 FL in the future. The major contribution of the paper is a briefing of the most relevant work in the area of type-2 fuzzy logic, including its theoretical and practical implications, as well as a vision of possible future work and trends in this area of research. We believe that this paper will provide a good platform for people interested in this area for their future research work.
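The footprint of uncertainty that distinguishes type-2 sets is easy to illustrate for the interval case: each input's membership is not a single grade but an interval bounded by a lower and an upper type-1 membership function. A small sketch, with triangle parameters invented purely for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership with peak b and feet a, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, lower, upper):
    """Interval type-2 membership: the primary membership of x is the
    interval [LMF(x), UMF(x)] bounded by two type-1 triangles."""
    return tri(x, *lower), tri(x, *upper)

# 'Warm' as an IT2 set: the footprint of uncertainty lies between the
# lower triangle (19, 22, 25) and the upper triangle (17, 22, 27).
print(it2_membership(20.0, (19, 22, 25), (17, 22, 27)))  # (~0.33, 0.6)
```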
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using an interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of an IT2 fuzzy H∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
Adaptive Fault-Tolerant Tracking Control for Discrete-Time Multiagent Systems via Reinforcement Learning Algorithm This article investigates the adaptive fault-tolerant tracking control problem for a class of discrete-time multiagent systems via a reinforcement learning algorithm. The action neural networks (NNs) are used to approximate unknown and desired control input signals, and the critic NNs are employed to estimate the cost function in the design procedure. Furthermore, the direct adaptive optimal controllers are designed by combining the backstepping technique with the reinforcement learning algorithm. Compared with existing reinforcement learning algorithms, the computational burden can be effectively reduced by using a method with fewer learning parameters. The adaptive auxiliary signals are established to compensate for the influence of the dead zones and actuator faults on the control performance. Based on the Lyapunov stability theory, it is proved that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. Finally, some simulation results are presented to illustrate the effectiveness of the proposed approach.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using an adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
A Survey of Reachability and Controllability for Positive Linear Systems. This paper is a survey of reachability and controllability results for discrete-time positive linear systems. It presents a variety of criteria in both algebraic and digraph forms for recognising these fundamental system properties with direct implications not only in dynamic optimization problems (such as those arising in inventory and production control, manpower planning, scheduling and other areas of operations research) but also in studying properties of reachable sets, in feedback control problems, and others. The paper highlights the intrinsic combinatorial structure of reachable/controllable positive linear systems and reveals the monomial components of such systems. The system matrix decomposition into monomial components is demonstrated by solving some illustrative examples.
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules.
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
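TAG-style aggregates are typically expressed as an initializer, a merge of partial state records flowing up the routing tree, and a final evaluator; AVERAGE, for instance, merges (sum, count) pairs. A small sketch of that decomposition, where the function names and tree encoding are illustrative rather than TAG's actual interface:

```python
# An aggregate = (init, merge, evaluate) over partial state records.
AVERAGE = (
    lambda reading: (reading, 1),             # init: (sum, count)
    lambda a, b: (a[0] + b[0], a[1] + b[1]),  # merge two partials
    lambda s: s[0] / s[1],                    # evaluate at the root
)

def aggregate_up(tree, readings, node, agg):
    """Combine a node's own reading with its children's partial
    state records, as if flowing up a TAG routing tree."""
    init, merge, _ = agg
    partial = init(readings[node])
    for child in tree.get(node, ()):
        partial = merge(partial, aggregate_up(tree, readings, child, agg))
    return partial

tree = {"root": ["a", "b"], "a": ["c"]}             # routing tree
readings = {"root": 20, "a": 22, "b": 18, "c": 24}  # sensor values
partial = aggregate_up(tree, readings, "root", AVERAGE)
print(AVERAGE[2](partial))                          # 21.0
```

MIN, MAX, COUNT, and SUM fit the same triple, which is why the paper can reason generically about which aggregates are cheap to compute in-network.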
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justify analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the Security-Critical Processor Errata Catching System (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, a SAR ADC in combination with digital compression is typically used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such a level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in less data to be transmitted in a system application. The event-driven LCADC without a timer and with a single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
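To make the cross-over behavior concrete, here is a small illustrative Python sketch, not the paper's model: it counts level-crossing events of a uniform B-bit quantizer on a synthetic bursty signal and compares them against a fixed-rate sample count. The signal shape, the 200 S/s fixed rate, and the resolutions are invented for illustration; the point is only that LC event counts grow with resolution while the fixed-rate count does not.

```python
import numpy as np

# Dense time grid standing in for the continuous input (1 s window).
fs_sim = 10_000
t = np.arange(0.0, 1.0, 1.0 / fs_sim)
# Synthetic bursty "biosignal": slow baseline plus short high-slope bursts.
x = 0.05 * np.sin(2 * np.pi * t)
x = x + np.where((t % 0.25) < 0.02, np.sin(2 * np.pi * 50 * t), 0.0)

def level_crossing_events(sig, bits):
    """One LC event per quantizer-bin change of a uniform bits-bit quantizer."""
    levels = np.linspace(sig.min(), sig.max(), 2 ** bits)
    bins = np.digitize(sig, levels)
    return int(np.count_nonzero(np.diff(bins)))

fixed_rate_samples = 200  # assumed 200 S/s fixed-rate ADC over the window
for bits in (4, 6, 8, 10):
    print(bits, "bits:", level_crossing_events(x, bits),
          "LC events vs", fixed_rate_samples, "fixed-rate samples")
```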
scores (score_0 through score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
Real time front-end for cognitive radio inspired by the human cochlea In this paper we discuss the real-time implementation and development of a front-end that is able to sample RF signals with a large bandwidth and dynamic range. This front-end uses an 8-channel RF multiplexer sampled by an 8-channel ADC board. An FPGA board is used to control the ADC and implements the digital synthesis filter bank. Preliminary results show that it is possible to reconstruct the input signal.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
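Because the dominance frontier is the load-bearing concept here, a compact sketch may help. This is the later worklist-free formulation popularized by Cooper, Harvey, and Kennedy rather than the paper's original pseudocode, and the diamond-shaped CFG is an invented example.

```python
def dominance_frontiers(preds, idom):
    """DF(n): nodes where n's dominance ends, i.e. the join points
    that need phi-functions for definitions in n (Cytron et al.)."""
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:  # walk up to n's immediate dominator
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Invented diamond CFG: entry -> a, entry -> b, a -> merge, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": None, "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))   # 'a' and 'b' both frontier 'merge'
```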
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
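A toy Python sketch of the placement rule only (consistent hashing onto an identifier ring): a key is stored at its successor, the first node clockwise from the key's identifier. The 16-bit ring and node names are invented, and the real protocol reaches the successor in O(log n) hops via finger tables rather than by scanning a sorted list.

```python
import hashlib

M = 16  # identifier bits for this toy; Chord uses 160 (SHA-1)

def chord_id(name: str) -> int:
    """Hash a node name or key onto the 2**M identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(ring_nodes, key_id):
    """First node id clockwise from key_id (with wrap-around)."""
    for n in ring_nodes:                 # ring_nodes is sorted ascending
        if n >= key_id:
            return n
    return ring_nodes[0]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
k = chord_id("some-data-item")
print(f"key {k} -> stored at node {successor(nodes, k)}")
```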
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
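The composite metrics named in this abstract are plain products of the three quantities McPAT reports; a tiny sketch with made-up numbers, just to pin down the definitions as commonly used:

```python
# Composite metrics over McPAT-style outputs (numbers invented):
energy, delay, area = 2.0, 1.5, 3.0
edp   = energy * delay              # energy-delay product
edap  = energy * delay * area       # EDAP
eda2p = energy * delay * area ** 2  # EDA²P (area weighted more heavily)
print(edp, edap, eda2p)             # 3.0 9.0 27.0
```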
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
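For reference, the scaled-form iterations for the generic ADMM problem, minimize f(x) + g(z) subject to Ax + Bz = c, as commonly written (u is the scaled dual variable y/ρ; this is the textbook form, stated here as a sketch rather than a quotation of the review):

```latex
\begin{aligned}
x^{k+1} &:= \operatorname*{argmin}_{x}\; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2 \\
z^{k+1} &:= \operatorname*{argmin}_{z}\; g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2 \\
u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}
```

Each application listed in the abstract (lasso, sparse logistic regression, and so on) amounts to choosing f and g so that the two minimizations have cheap closed-form or well-studied solutions.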
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
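A small Python sketch of the core mechanism, membership vectors: at level i, nodes sharing the first i bits of their random vector form one sorted doubly-linked list, so expected list length halves per level as in a skip list. Linking and search routing are omitted, and the keys and vector depth are invented.

```python
import random

def membership_vector(rng, depth=4):
    """Random bit string that fixes which per-level lists a node joins."""
    return tuple(rng.randrange(2) for _ in range(depth))

def level_lists(nodes, level):
    """Group nodes by the first `level` bits of their membership vector."""
    groups = {}
    for key, mv in nodes:
        groups.setdefault(mv[:level], []).append(key)
    return {prefix: sorted(ks) for prefix, ks in groups.items()}

rng = random.Random(1)
nodes = [(k, membership_vector(rng)) for k in (13, 33, 48, 75, 99, 6)]
for lvl in range(3):
    print(lvl, level_lists(nodes, lvl))  # lists thin out as lvl grows
```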
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
scores (score_0 through score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
An Efficient VLSI Architecture of a Reconfigurable Pulse-Shaping FIR Interpolation This brief proposes a two-step optimization technique for designing a reconfigurable VLSI architecture of an interpolation filter for a multistandard digital up-converter (DUC) to reduce the power and area consumption. The proposed technique initially reduces the number of multiplications per input sample and additions per input sample by 83% in comparison with individual implementation of each stan...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
scores (score_0 through score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Deep Reinforcement Learning-Based Multi-Optimality Routing Scheme For Dynamic IoT Networks With the development of Internet of Things (IoT) and 5G technologies, more and more applications, such as autonomous vehicles and tele-medicine, become more sensitive to network latency and accuracy, requiring routing schemes to be more flexible and efficient. In order to meet such an urgent need, learning-based routing strategies are emerging as strong candidate solutions, with the advantages of high flexibility and accuracy. These strategies can be divided into two categories, centralized and distributed, enjoying the advantages of high precision and high efficiency, respectively. However, routing becomes more complex in dynamic IoT networks, where the link connections and access states are time-varying, hence these learning-based routing mechanisms are required to have the capability to adapt to network changes in real time. In this paper, we designed and implemented both centralized and distributed Reinforcement Learning-based Routing schemes combined with Multi-optimality routing criteria (RLR-M). By conducting a series of experiments, we performed a comprehensive analysis of the results and arrived at the conclusion that the centralized scheme is better suited to cope with dynamic networks due to its faster reconvergence (2.2× over distributed), while the distributed scheme is better positioned to handle large-scale networks through its high scalability (1.6× over centralized). Moreover, the multi-optimality routing scheme is implemented through model fusion, which is more flexible than traditional strategies and as such is better placed to meet the needs of IoT.
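The RLR-M scheme itself is not spelled out in the abstract, so as a loose illustration of learning-based next-hop selection, here is a minimal Q-routing-style sketch in the spirit of Boyan and Littman, on an invented four-node topology; every parameter is an assumption and this is not the paper's algorithm.

```python
import random

def q_route(graph, q, src, dst, alpha=0.5, eps=0.1, rng=random.Random(0)):
    """Forward one packet, updating Q[node][nbr] = est. hop count via nbr."""
    node, hops = src, 0
    while node != dst and hops < 50:
        nbrs = graph[node]
        nxt = (rng.choice(nbrs) if rng.random() < eps        # explore
               else min(nbrs, key=lambda n: q[node][n]))     # exploit
        # bootstrap target: one hop plus the neighbor's best estimate onward
        target = 1 + (0 if nxt == dst else min(q[nxt][m] for m in graph[nxt]))
        q[node][nxt] += alpha * (target - q[node][nxt])
        node, hops = nxt, hops + 1
    return hops

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # invented topology
q = {u: {v: 10.0 for v in vs} for u, vs in graph.items()}
for _ in range(200):
    q_route(graph, q, src=0, dst=3)
print({v: round(c, 2) for v, c in q[0].items()})  # both next hops -> ~2 hops
```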
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to their increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient; however, they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient; however, they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
Reducing query overhead through route learning in unstructured peer-to-peer network In unstructured peer-to-peer networks, such as Gnutella, peers propagate query messages towards the resource holders by flooding them through the network. This is, however, a costly operation since it consumes node and link resources excessively and often unnecessarily. There is no reason, for example, for a peer to receive a query message if the peer has no matching resource or is not on the path to a peer holding a matching resource. In this paper, we present a solution to this problem, which we call Route Learning, aiming to reduce query traffic in unstructured peer-to-peer networks. In Route Learning, peers try to identify the most likely neighbors through which replies can be obtained to submitted queries. In this way, a query is forwarded only to a subset of the neighbors of a peer, or it is dropped if no neighbor likely to reply is found. The scheme also has mechanisms to cope with variations in user-submitted queries, like changes in the keywords. The scheme can also evaluate the route for a query for which it is not trained. We show through simulation results that when compared to a pure flooding-based querying approach, our scheme reduces bandwidth overhead significantly without sacrificing user satisfaction.
A Trusted Routing Scheme Using Blockchain and Reinforcement Learning for Wireless Sensor Networks. A trusted routing scheme is very important to ensure the routing security and efficiency of wireless sensor networks (WSNs). Many studies have sought to improve trustworthiness between routing nodes using cryptographic systems, trust management, or centralized routing decisions. However, most of these routing schemes are difficult to realize in practice, as it is hard to dynamically identify the untrusted behaviors of routing nodes. Meanwhile, there is still no effective way to prevent malicious node attacks. In view of these problems, this paper proposes a trusted routing scheme using blockchain and reinforcement learning to improve the routing security and efficiency of WSNs. A feasible routing scheme is given for obtaining the routing information of routing nodes on the blockchain, which makes the routing information traceable and impossible to tamper with. The reinforcement learning model is used to help routing nodes dynamically select more trusted and efficient routing links. The experimental results show that even in a routing environment with 50% malicious nodes, our routing scheme still has good delay performance compared with other routing algorithms. Performance indicators such as energy consumption and throughput also show that our scheme is feasible and effective.
Decentralized Multi-Agent Reinforcement Learning With Networked Agents: Recent Advances Multi-agent reinforcement learning (MARL) has long been a significant research topic in both machine learning and control systems. Recent development of (single-agent) deep reinforcement learning has created a resurgence of interest in developing new MARL algorithms, especially those founded on theoretical analysis. In this paper, we review recent advances on a sub-area of this topic: decentralized MARL with networked agents. In this scenario, multiple agents perform sequential decision-making in a common environment, and without the coordination of any central controller, while being allowed to exchange information with their neighbors over a communication network. Such a setting finds broad applications in the control and operation of robots, unmanned vehicles, mobile sensor networks, and the smart grid. This review covers several of our research endeavors in this direction, as well as progress made by other researchers along the line. We hope that this review promotes additional research efforts in this exciting yet challenging area.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
A Formal Basis for the Heuristic Determination of Minimum Cost Paths Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
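The best-known instance of the framework this paper builds is A*. Below is a minimal Python sketch on an invented weighted graph with an admissible heuristic (h never overestimates the true remaining cost), under which the first expansion of the goal is optimal:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search ordering the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, w in graph[node]:
            g2 = g + w
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

graph = {"s": [("a", 1), ("b", 4)], "a": [("g", 5)], "b": [("g", 1)], "g": []}
h = {"s": 2, "a": 4, "b": 1, "g": 0}.get     # admissible estimates
print(a_star(graph, h, "s", "g"))            # (5, ['s', 'b', 'g'])
```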
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
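For concreteness, the linear consensus protocol the paper analyzes can be written in the standard graph-Laplacian notation below (a_ij are edge weights, N_i the neighbors of agent i, L = D − A the Laplacian); this is the common textbook form, not a quotation of the paper. Its convergence speed toward the average is governed by the Fiedler eigenvalue λ₂(L), which is the connection the abstract highlights.

```latex
\dot{x}_i(t) = \sum_{j \in N_i} a_{ij}\,\bigl(x_j(t) - x_i(t)\bigr)
\qquad\Longleftrightarrow\qquad
\dot{x}(t) = -L\,x(t)
```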
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
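A minimal push-pull averaging sketch in the spirit of the protocol: in each round every node pairs with a random peer and both adopt the pair average, so all values converge to the global mean. Synchronous rounds, uniform peer sampling, and the load values are simplifying assumptions, not the paper's exact model.

```python
import random

def gossip_average(values, rounds=30, rng=random.Random(42)):
    """Push-pull averaging gossip over a fully connected overlay."""
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)               # pick a random peer
            x[i] = x[j] = (x[i] + x[j]) / 2.0  # both adopt the pair average
    return x

loads = [10.0, 0.0, 4.0, 2.0, 8.0]             # invented per-node loads
print(sum(loads) / len(loads))                 # true mean: 4.8
print([round(v, 3) for v in gossip_average(loads)])  # every node ~= 4.8
```

Counting falls out of the same primitive: initialize one node to 1 and the rest to 0, and the converged average is 1/n, from which each node can estimate the network size.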
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint on the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help in taking the best decision in order to best contribute to sustainable development.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure Vout > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
scores (score_0 through score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.007692, 0, 0, 0, 0, 0, 0, 0, 0
A 0.024mm² 8b 400MS/s SAR ADC with 2b/cycle and resistive DAC in 65nm CMOS.
A 6.2mW 7b 3.5GS/s time interleaved 2-stage pipelined ADC in 40nm CMOS A 7b time interleaved hybrid ADC in 40nm CMOS is presented. The ADC consists of two pipelined stages and combines an intrinsically linear SAR with a fully calibrated binary search architecture to achieve energy efficiency. The first stage of each channel consists of a 3b SAR followed by a dynamic amplifier merged with a comparator. The second stage is a 3b comparator-based asynchronous binary search with threshold calibration to compensate amplifier nonlinearity. The calibration references are generated on chip by using the DAC embedded in the first stage. The prototype achieves a peak SNDR of 38dB at 3.5GS/s while consuming approximately 6.2mW.
A 4.5-mW 8-b 750-MS/s 2-b/step asynchronous subranged SAR ADC in 28-nm CMOS technology An 8-b 2-b/step asynchronous subranged SAR ADC is presented. It incorporates a subranging technique to obtain fast reference settling for the MSB conversion. The capacitive interpolation reduces the number of NMOS switches and lowers the matching requirement of a resistive DAC. The proposed timing scheme avoids the need for a specific duty cycle of the external clock for defining the sampling period in a conventional asynchronous SAR ADC. Operating at 750 MS/s, this ADC consumes 4.5 mW from a 1-V supply, achieves an ENOB of 7.2 and an FOM of 41 fJ/conversion-step. It is fabricated in 28-nm CMOS technology and occupies an active area of 0.004 mm².
A 2.2mW 5b 1.75GS/s Folding Flash ADC in 90nm Digital CMOS
A 10b 100MS/s 1.13mW SAR ADC with binary-scaled error compensation This paper presents a 10 b SAR ADC with a binary-scaled error compensation technique. The prototype occupies an active area of 155 × 165 μm² in 65 nm CMOS. At 100 MS/s, the ADC achieves an SNDR of 59.0 dB and an SFDR of 75.6 dB, while consuming 1.13 mW from a 1.2 V supply. The FoM is 15.5 fJ/conversion-step.
A Polynomial-Based Time-Varying Filter Structure for the Compensation of Frequency-Response Mismatch Errors in Time-Interleaved ADCs This paper introduces a structure for the compensation of frequency-response mismatch errors in M-channel time-interleaved analog-to-digital converters (ADCs). It makes use of a number of fixed digital filters, approximating differentiators of different orders, and a few variable multipliers that correspond to parameters in polynomial models of the channel frequency responses. Whenever the channel frequency responses change, which occurs from time to time in a practical time-interleaved ADC, it suffices to alter the values of these variable multipliers. In this way, expensive on-line filter design is avoided. The paper includes several design examples that illustrate the properties and capabilities of the proposed structure.
Column-oriented database systems Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table's columns becomes faster, at the potential expense of excessive disk-head seeking from column to column for scattered reads or updates. After several dozens of research papers and at least a dozen of new column-store start-ups, several questions remain. Are these a new breed of systems or simply old wine in new bottles? How easily can a major row-based system achieve column-store performance? Are column-stores the answer to effortlessly support large-scale data-intensive applications? What are the new, exciting system research problems to tackle? What are the new applications that can be potentially enabled by column-stores? In this tutorial, we present an overview of column-oriented database system technology and address these and other related questions.
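A toy Python sketch of the layout difference described above (table contents invented): the same aggregate touches every record in a row-store, but only one contiguous, compression-friendly array in a column-store.

```python
# One 3-column table, two physical layouts.
rows = [(1, "alice", 30), (2, "bob", 25), (3, "carol", 41)]       # row-store
cols = {"id": [1, 2, 3],                                           # column-store
        "name": ["alice", "bob", "carol"],
        "age": [30, 25, 41]}

avg_age_rows = sum(r[2] for r in rows) / len(rows)   # scans whole records
avg_age_cols = sum(cols["age"]) / len(cols["age"])   # scans one array
assert avg_age_rows == avg_age_cols == 32.0
```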
Data reorganization in memory using 3D-stacked DRAM In this paper we focus on common data reorganization operations such as shuffle, pack/unpack, swap, transpose, and layout transformations. Although these operations simply relocate the data in the memory, they are costly on conventional systems mainly due to inefficient access patterns, limited data reuse and roundtrip data traversal throughout the memory hierarchy. This paper presents a two-pronged approach for efficient data reorganization, which combines (i) a proposed DRAM-aware reshape accelerator integrated within 3D-stacked DRAM, and (ii) a mathematical framework that is used to represent and optimize the reorganization operations. We evaluate our proposed system through two major use cases. First, we demonstrate the reshape accelerator in performing a physical address remapping via data layout transform to utilize the internal parallelism/locality of the 3D-stacked DRAM structure more efficiently for general purpose workloads. Then, we focus on offloading and accelerating commonly used data reorganization routines selected from the Intel Math Kernel Library package. We evaluate the energy and performance benefits of our approach by comparing it against existing optimized implementations on state-of-the-art GPUs and CPUs. For the various test cases, in-memory data reorganization provides orders of magnitude performance and energy efficiency improvements via low overhead hardware.
MapGraph: A High Level API for Fast Development of High Performance Graph Analytics on GPUs High performance graph analytics are critical for a long list of application domains. In recent years, the rapid advancement of many-core processors, in particular graphical processing units (GPUs), has sparked a broad interest in developing high performance parallel graph programs on these architectures. However, the SIMT architecture used in GPUs places particular constraints on both the design and implementation of the algorithms and data structures, making the development of such programs difficult and time-consuming. We present MapGraph, a high performance parallel graph programming framework that delivers up to 3 billion Traversed Edges Per Second (TEPS) on a GPU. MapGraph provides a high-level abstraction that makes it easy to write graph programs and obtain good parallel speedups on GPUs. To deliver high performance, MapGraph dynamically chooses among different scheduling strategies depending on the size of the frontier and the size of the adjacency lists for the vertices in the frontier. In addition, a Structure Of Arrays (SOA) pattern is used to ensure coalesced memory access. Our experiments show that, for many graph analytics algorithms, an implementation, with our abstraction, is up to two orders of magnitude faster than a parallel CPU implementation and is comparable to state-of-the-art, manually optimized GPU implementations. In addition, with our abstraction, new graph analytics can be developed with relatively little effort.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
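A Python sketch of the heuristic's first phase, often called floodmax: for d synchronous rounds each node adopts the largest node id heard from its one-hop neighbors, so afterwards every node knows the maximum id within d hops. The full Max-Min heuristic adds a floodmin phase and tie-breaking rules, and the chain topology here is an invented example.

```python
def floodmax(adj, d):
    """After d rounds, winner[v] is the largest node id within d hops of v."""
    winner = {v: v for v in adj}
    for _ in range(d):
        winner = {v: max([winner[v]] + [winner[u] for u in adj[v]])
                  for v in adj}
    return winner

# Invented chain 1-2-3-4-5 with d = 2.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(floodmax(adj, d=2))  # {1: 3, 2: 4, 3: 5, 4: 5, 5: 5}
```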
On classification with incomplete data. We address the incomplete-data problem in which feature vectors to be classified are missing data (features). A (supervised) logistic regression algorithm for the classification of incomplete data is developed. Single or multiple imputation for the missing data is avoided by performing analytic integration with an estimated conditional density function (conditioned on the observed data). Conditional density functions are estimated using a Gaussian mixture model (GMM), with parameter estimation performed using both Expectation-Maximization (EM) and Variational Bayesian EM (VB-EM). The proposed supervised algorithm is then extended to the semisupervised case by incorporating graph-based regularization. The semisupervised algorithm utilizes all available data-both incomplete and complete, as well as labeled and unlabeled. Experimental results of the proposed classification algorithms are shown.
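One ingredient of this approach can be made concrete: conditioning a fitted GMM on the observed features. The numpy sketch below (function name and interface invented for illustration) computes the conditional mean of the missing entries; note the paper integrates the classifier analytically over this conditional density rather than imputing, so this only shows the Gaussian conditioning step itself:

```python
import numpy as np

# Conditional mean of missing features given observed ones under a GMM.
# E[x_m | x_o] = sum_k r_k (mu_m^k + S_mo^k (S_oo^k)^-1 (x_o - mu_o^k)),
# with responsibilities r_k computed from the observed marginal.

def gmm_conditional_mean(x, obs, weights, means, covs):
    """x: full vector (missing entries ignored); obs: boolean mask."""
    m = ~obs
    log_w, cond_mu = [], []
    for w, mu, S in zip(weights, means, covs):
        Soo = S[np.ix_(obs, obs)]
        Smo = S[np.ix_(m, obs)]
        diff = x[obs] - mu[obs]
        sol = np.linalg.solve(Soo, diff)
        _, logdet = np.linalg.slogdet(Soo)
        log_w.append(np.log(w) - 0.5 * (diff @ sol + logdet))
        cond_mu.append(mu[m] + Smo @ sol)
    r = np.exp(np.array(log_w) - max(log_w))
    r /= r.sum()
    return sum(ri * cm for ri, cm in zip(r, cond_mu))
```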
Study of Subharmonically Injection-Locked PLLs A complete analysis of subharmonically injection-locked PLLs develops the fundamental theory of the subharmonic locking phenomenon. It explains the noise-shaping phenomenon, locking range and behavior, PVT tolerance, and the pseudo-locking issue. All of the analyses are verified by real chip measurements. Two 20-GHz PLLs based on the proposed theory are designed and fabricated in 90-nm CMOS technology to dem...
An emergency communication system based on software-defined radio. Wireless telecommunications represent an important asset for public protection and disaster relief (PPDR) organizations as they improve the coordination and the distribution of information among first responders in the field. In large international disaster scenarios, many different PPDR organizations may participate to the response phase of disaster management. In this context, PPDR organizations may use different wireless communication technologies; such diversity may create interoperability barriers and degrade the coordination among first-time responders. In this paper, we present the design, the integration, and the testing of a demonstration system based on software-defined radio (SDR) technology and software communication architecture (SCA) to support PPDR operations with special focus on the provision of satellite communications. This paper describes the main components of the demonstration system, the integration activities as well as the testing scenarios, which were used to evaluate the technical feasibility. The paper also describes the main technical challenges in the implementation and integration of the demonstration system. Finally, future developments for this technology and potential deployment challenges are presented.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.05184
0.05
0.025955
0.012744
0.006355
0.000227
0.000054
0.000007
0.000001
0
0
0
0
0
Improved Switched System Approach to Networked Control Systems With Time-Varying Delays An improved switched system approach is proposed for the stability and stabilization of networked control systems with time-varying delays. The approach features a mode-dependent state feedback controller guaranteeing the exponential stability of the closed-loop system, which is implemented within the packet-based control framework. The effectiveness of the proposed approach is finally demonstrated experimentally by a networked inverted pendulum control system.
Offset-Free Model Predictive Control for the Power Control of Three-Phase AC/DC Converters This paper describes an offset-free model predictive control (MPC) algorithm using a disturbance observer (DOB) to control the active/reactive powers of a three-phase AC/DC converter. The strategy of this paper is twofold. One is the use of DOB to remove the offset error and the other is the proper choice of the weighting matrices of a cost index to provide fast error decay with small overshoot. The DOB is designed to estimate the unknown disturbances of the AC/DC converter following the standard Luenberger observer design procedure. The proposed MPC minimizes a one-step-ahead cost index penalizing the predicted tracking error by performing a simple membership test without any use of numerical methods. A systematic way for choosing the weights of the cost index, which guarantees the global stability of the closed-loop system, is proposed. Use of the DOB eliminates the offset tracking errors in the real implementation. Using a 25-kW AC/DC converter, it is experimentally shown that the proposed MPC enhances the power tracking performance while considerably reducing the mutual interference of the active/reactive powers as well as the output voltage.
Further results on cloud control systems. This paper is devoted to further investigating cloud control systems (CCSs). The benefits and challenges of CCSs are discussed. Both new research results of ours and representative work by other researchers are presented. It is believed that CCSs can have a huge and promising impact due to their potential advantages.
Moving Horizon Estimation for Mobile Robots With Multirate Sampling. This paper investigates the multirate moving horizon estimation (MMHE) problem for mobile robots with an inertial sensor and camera, where the sampling rates of the sensors are not identical. In such multirate systems, some sensors may have no measurements at certain sampling times, which can be regarded as measurement missing and may significantly degrade the estimation performance. A bi...
Real-Time Switched Model Predictive Control for a Cyber-Physical Wind Turbine Emulator The high complexity and nonlinearity of wind turbine (WT) systems impose the utilization of rigorous control methods such as model predictive control (MPC). MPC algorithms are computationally intensive requiring investigation of real-time implementability and feasibility in addition to control performance metrics. In this article, a switched model predictive controller (SMPC) is developed, implemented, and investigated for control objectives performance and real-time metrics. This article has two main contributions. First, embedded real-time SMPC is developed using qpOASES as an embedded solver for the online optimal control problem. Second, a cyber-physical real-time emulator for variable-speed variable-pitch utility-scale WT is developed and implemented on an xPC target machine using a high-fidelity linear parameter-varying model. The SMPC is evaluated on the fatigue, aerodynamics, structures, and turbulence (FAST) design code and the real-time emulator. The analysis and investigation of results highlight the feasibility and capability of SMPC for handling control objectives of WT systems within real-time using short control periods.
A New Delay-Compensation Scheme for Networked Control Systems in Controller Area Networks. In this work, we aim to study a new delay-compensation algorithm for networked control systems (NCSs) which are connected via the controller area network (CAN) buses. First, we analyze the property of CAN bus and find the main sources of CAN-bus-induced delays. The system controlled through a CAN bus is formulated into the typical framework of NCSs. Different from the traditional state feedback or...
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
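For reference, the closed-form network score at the heart of this method is the Cooper-Herskovits marginal likelihood. Written in the usual notation (r_i possible values of variable x_i, q_i parent-set configurations, N_ijk the count of cases with x_i in state k under parent configuration j), it is:

```latex
P(B_S, D) = P(B_S) \prod_{i=1}^{n} \prod_{j=1}^{q_i}
\frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!} \prod_{k=1}^{r_i} N_{ijk}!,
\qquad N_{ij} = \sum_{k=1}^{r_i} N_{ijk}
```

Greedy structure search then adds, for each variable, the parent that most increases this score, which is the essence of the paper's K2-style algorithm.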
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
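The classical Lloyd-Max iteration that this work generalizes to the distributed, multi-sensor setting is easy to sketch. Below is a small numpy version (names are illustrative) that alternates between centroids, the conditional means of each quantization cell, and thresholds, the midpoints of adjacent centroids, with samples standing in for the source distribution:

```python
import numpy as np

# Classical scalar Lloyd-Max quantizer design via alternating updates.

def lloyd_max(samples: np.ndarray, levels: int, iters: int = 50):
    reps = np.quantile(samples, np.linspace(0.1, 0.9, levels))  # initial levels
    for _ in range(iters):
        edges = (reps[:-1] + reps[1:]) / 2      # nearest-neighbor thresholds
        cells = np.digitize(samples, edges)     # assign samples to cells
        reps = np.array([samples[cells == i].mean() if np.any(cells == i)
                         else reps[i] for i in range(levels)])
    return reps, edges

# Example: a 4-level quantizer for a standard Gaussian source.
reps, edges = lloyd_max(np.random.randn(50_000), levels=4)
```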
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
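The key enabler of in-network aggregation is that an aggregate decomposes into a partial-state pattern, so interior nodes of the routing tree combine their children's compact states instead of forwarding raw readings. A hedged sketch of that decomposition for AVG (function names are illustrative, not the system's API):

```python
# An aggregate as (init, merge, evaluate) over partial states.

AVG = {
    "init":     lambda reading: (reading, 1),              # (sum, count)
    "merge":    lambda a, b: (a[0] + b[0], a[1] + b[1]),
    "evaluate": lambda s: s[0] / s[1],
}

def combine(reading, children_states):
    state = AVG["init"](reading)
    for cs in children_states:
        state = AVG["merge"](state, cs)
    return state    # sent to the parent; only the root calls "evaluate"

root_state = combine(20.5, [(61.0, 3), (39.5, 2)])
average = AVG["evaluate"](root_state)   # (121.0, 6) -> about 20.17
```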
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes--ILP, TLP, and DLP-demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-μm CMOS technology in an area of 1.75×1.55 mm^2, the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10^-9 with a pseudorandom bit sequence of 2^23-1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
A DHT-Based Discovery Service for the Internet of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to the feedback information coming from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-R_ESR design configuration. The prototype fabricated using a TSMC 0.25-μm CMOS process occupies an area of 1.78 mm^2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Dead-beat terminal sliding-mode control: A guaranteed attractiveness approach This paper presents a design method for terminal sliding-mode control with guaranteed attractiveness, in which the tracking performance of the closed-loop system under consideration is governed by a discretized finite-time system (FTS). To alleviate chattering effectively, this paper conducts the controller derivation according to the dead-beat reaching law that defines the sliding variable with the smoothing exponential-rate FTS in discrete time. The closed-loop performance improvement is accomplished by applying a finite-difference uncertainty observer, and embedding the measure of uncertainty compensation in the error dynamics. The steady-state error band, absolute attracting layer and the monotone decreasing region of the error dynamics are derived in detail. The experimental results verify not only the ease and efficiency of controller design by the discretized FTS approach, but also the attractiveness and robustness properties assured in the closed-loop system.
A Probabilistic Neural-Fuzzy Learning System for Stochastic Modeling A probabilistic fuzzy neural network (PFNN) with a hybrid learning mechanism is proposed to handle complex stochastic uncertainties. Fuzzy logic systems (FLSs) are well known for vagueness processing. Embedded with the probabilistic method, an FLS will possess the capability to capture stochastic uncertainties. Further enhanced with the neural learning, it will be able to work under time-varying stochastic environment. Integrated with a statistical process control (SPC) based monitoring method, the PFNN can maintain the robust modeling performance. Finally, the successful simulation demonstrates the modeling effectiveness of the proposed PFNN under the time-varying stochastic conditions.
Design of Fuzzy-Neural-Network-Inherited Backstepping Control for Robot Manipulator Including Actuator Dynamics This study presents the design and analysis of an intelligent control system that inherits the systematic and recursive design methodology for an n-link robot manipulator, including actuator dynamics, in order to achieve a high-precision position tracking with a firm stability and robustness. First, the coupled higher order dynamic model of an n-link robot manipulator is introduced briefly. Then, a conventional backstepping control (BSC) scheme is developed for the joint position tracking of the robot manipulator. Moreover, a fuzzy-neural-network-inherited BSC (FNNIBSC) scheme is proposed to relax the requirement of detailed system information to improve the robustness of BSC and to deal with the serious chattering that is caused by the discontinuous function. In the FNNIBSC strategy, the FNN framework is designed to mimic the BSC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and Lyapunov stability theorem to ensure the network convergence as well as stable control performance. Numerical simulations and experimental results of a two-link robot manipulator that are actuated by dc servomotors are provided to justify the claims of the proposed FNNIBSC system, and the superiority of the proposed FNNIBSC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.
Reactive Power Control of Three-Phase Grid-Connected PV System During Grid Faults Using Takagi–Sugeno–Kang Probabilistic Fuzzy Neural Network Control An intelligent controller based on the Takagi-Sugeno-Kang-type probabilistic fuzzy neural network with an asymmetric membership function (TSKPFNN-AMF) is developed in this paper for the reactive and active power control of a three-phase grid-connected photovoltaic (PV) system during grid faults. The inverter of the three-phase grid-connected PV system should provide a proper ratio of reactive power to meet the low-voltage ride through (LVRT) regulations and control the output current without exceeding the maximum current limit simultaneously during grid faults. Therefore, the proposed intelligent controller regulates the value of reactive power to a new reference value, which complies with the regulations of LVRT under grid faults. Moreover, a dual-mode operation control method of the converter and inverter of the three-phase grid-connected PV system is designed to eliminate the fluctuation of dc-link bus voltage under grid faults. Furthermore, the network structure, the online learning algorithm, and the convergence analysis of the TSKPFNN-AMF are described in detail. Finally, some experimental results are illustrated to show the effectiveness of the proposed control for the three-phase grid-connected PV system.
Discrete-Time Quasi-Sliding-Mode Control With Prescribed Performance Function and its Application to Piezo-Actuated Positioning Systems. In this paper, the constrained control problem of the prescribed performance control technique is discussed in discrete-time domain for single input-single output dynamical systems. The goal of this design is to maintain the tracking error trajectory in a predefined convergence zone described by a performance function in the presence of the uncertainties. In order to achieve this goal, the discret...
Sliding mode control for singularly perturbed Markov jump descriptor systems with nonlinear perturbation This paper develops a stochastic integral sliding mode control strategy for singularly perturbed Markov jump descriptor systems subject to nonlinear perturbation. The transition probabilities (TPs) for the system modes are considered to switch randomly within a finite set. We first present a novel mode and switch-dependent integral switching surface, based upon which the resulting sliding mode dynamics (SMD) only suffers from the unmatched perturbation that is not amplified in the Euclidean norm sense. To overcome the difficulty of synthesizing the nominal controller, we rewrite the SMD into the equivalent descriptor form. By virtue of the fixed-point principle and stochastic system theory, we give a rigorous proof for the existence and uniqueness of the solution and the mean-square exponential admissibility for the transformed SMD. A generalized framework that covers arbitrary switching and Markov switching of the TPs as special cases is further achieved. Then, by analyzing the stochastic reachability of the sliding motion, we synthesize a mode and switch-dependent SMC law. The adaptive technique is further integrated to estimate the unavailable boundaries of the matched perturbation. Finally, simulation results on an electronic circuit system confirm the validity and benefits of the developed control strategy.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm^2 of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
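The projected multi-agent subgradient iteration studied here has the following general form (notation assumed for illustration: P_{X_i} is Euclidean projection onto agent i's constraint set, a_{ij}(k) are the random averaging weights of the state-dependent network at time k, alpha(k) is the stepsize, and g_i(k) is a subgradient of agent i's local objective at the averaged point):

```latex
x_i(k+1) \;=\; P_{X_i}\!\left[\,\sum_{j=1}^{n} a_{ij}(k)\, x_j(k) \;-\; \alpha(k)\, g_i(k)\right]
```

The coupling the abstract mentions arises because the weights a_{ij}(k) themselves depend on the agents' states, so the averaging step and the subgradient/projection errors cannot be analyzed separately.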
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10-40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies (spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation) and quantify the performance they achieve, both singly and in combination.
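The rule-based model can be illustrated in software. The sketch below (invented names; not the PUMP's actual hardware interface) shows a policy as a pure function from an opcode and input tags to a result tag, with a cache in front of it playing the role of the hardware rule cache and misses handled by the software policy:

```python
# Software model of rule-cache-based metadata processing with a simple
# taint-tracking policy. Names and interface are illustrative only.

rule_cache: dict[tuple, str] = {}

def taint_policy(op: str, tag_a: str, tag_b: str) -> str:
    # The result is tainted if any input is tainted.
    return "tainted" if "tainted" in (tag_a, tag_b) else "clean"

def resolve(op: str, tag_a: str, tag_b: str) -> str:
    key = (op, tag_a, tag_b)
    if key not in rule_cache:       # miss: fall back to the software handler
        rule_cache[key] = taint_policy(op, tag_a, tag_b)
    return rule_cache[key]

result_tag = resolve("add", "clean", "tainted")   # -> "tainted"
```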
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
FReaC Cache: Folded-logic Reconfigurable Computing in the Last Level Cache The need for higher energy efficiency has resulted in the proliferation of accelerators across platforms, with custom and reconfigurable accelerators adopted in both edge devices and cloud servers. However, existing solutions fall short in providing accelerators with low-latency, high-bandwidth access to the working set and suffer from the high latency and energy cost of data transfers. Such costs can severely limit the smallest granularity of the tasks that can be accelerated and thus the applicability of the accelerators. In this work, we present FReaC Cache, a novel architecture that natively supports reconfigurable computing in the last level cache (LLC), thereby giving energy-efficient accelerators low-latency, high-bandwidth access to the working set. By leveraging the cache's existing dense memory arrays, buses, and logic folding, we construct a reconfigurable fabric in the LLC with minimal changes to the system, processor, cache, and memory architecture. FReaC Cache is a low-latency, low-cost, and low-power alternative to off-die/off-chip accelerators, and a flexible and low-cost alternative to fixed-function accelerators. We demonstrate an average speedup of 3X and Perf/W improvements of 6.1X over an edge-class multi-core CPU, and add 3.5% to 15.3% area overhead per cache slice.
Architecture Aware Partitioning Algorithms Existing partitioning algorithms provide limited support for load balancing simulations that are performed on heterogeneous parallel computing platforms. On such architectures, effective load balancing can only be achieved if the graph is distributed so that it properly takes into account the available resources (CPU speed, network bandwidth). With heterogeneous technologies becoming more popular, the need for suitable graph partitioning algorithms is critical. We developed such algorithms that can address the partitioning requirements of scientific computations, and can correctly model the architectural characteristics of emerging hardware platforms.
AMD Fusion APU: Llano The Llano variant of the AMD Fusion accelerated processor unit (APU) deploys AMD Turbo CORE technology to maximize processor performance within the system's thermal design limits. Low-power design and performance/watt ratio optimization were key design approaches, and power gating is implemented pervasively across the APU.
Decoupling Data Supply from Computation for Latency-Tolerant Communication in Heterogeneous Architectures. In today's computers, heterogeneous processing is used to meet performance targets at manageable power. In adopting increased compute specialization, however, the relative amount of time spent on communication increases. System and software optimizations for communication often come at the cost of increased complexity and reduced portability. The Decoupled Supply-Compute (DeSC) approach offers a way to attack communication latency bottlenecks automatically, while maintaining good portability and low complexity. Our work expands prior Decoupled Access Execute techniques with hardware/software specialization. For a range of workloads, DeSC offers roughly 2× speedup, and additional specialized compression optimizations reduce traffic between decoupled units by 40%.
Stream Floating: Enabling Proactive and Decentralized Cache Optimizations As multicore systems continue to grow in scale and on-chip memory capacity, the on-chip network bandwidth and latency become problematic bottlenecks. Because of this, overheads in data transfer, the coherence protocol and replacement policies become increasingly important. Unfortunately, even in well-structured programs, many natural optimizations are difficult to implement because of the reactive...
Decentralized Offload-based Execution on Memory-centric Compute Cores.
QsCores: trading dark silicon for scalable energy efficiency with quasi-specific cores Transistor density continues to increase exponentially, but power dissipation per transistor is improving only slightly with each generation of Moore's law. Given the constant chip-level power budgets, this exponentially decreases the percentage of transistors that can switch at full frequency with each technology generation. Hence, while the transistor budget continues to increase exponentially, the power budget has become the dominant limiting factor in processor design. In this regime, utilizing transistors to design specialized cores that optimize energy-per-computation becomes an effective approach to improve system performance. To trade transistors for energy efficiency in a scalable manner, we propose Quasi-specific Cores, or QsCores, specialized processors capable of executing multiple general-purpose computations while providing an order of magnitude more energy efficiency than a general-purpose processor. The QsCores design flow is based on the insight that similar code patterns exist within and across applications. Our approach exploits these similar code patterns to ensure that a small set of specialized cores support a large number of commonly used computations. We evaluate QsCores's ability to target both a single application library (e.g., data structures) as well as a diverse workload consisting of applications selected from different domains (e.g., SPECINT, EEMBC, and Vision). Our results show that QsCores can provide 18.4× better energy efficiency than general-purpose processors while reducing the amount of specialized logic required to support the workload by up to 66%.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Network-based robust H∞ control of systems with uncertainty This paper is concerned with the design of robust H∞ controllers for uncertain networked control systems (NCSs) with the effects of both the network-induced delay and data dropout taken into consideration. A new analysis method for the H∞ performance of NCSs is provided by introducing some slack matrix variables and employing the information of the lower bound of the network-induced delay. The designed H∞ controller is of memoryless type, which can be obtained by solving a set of linear matrix inequalities. Numerical examples and simulation results are given finally to illustrate the effectiveness of the method.
Incremental Stochastic Subgradient Algorithms for Convex Optimization This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
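In the cyclic version, one iteration sweeps the ring, with each agent applying its own noisy subgradient in turn. Written with assumed notation (P_X projection onto the constraint set, alpha_k the stepsize, g_i a subgradient of component f_i, and epsilon_{i,k} the stochastic error), the update is:

```latex
z_{0,k} = x_k, \qquad
z_{i,k} = P_X\!\left[z_{i-1,k} - \alpha_k \left(g_i(z_{i-1,k}) + \epsilon_{i,k}\right)\right],
\quad i = 1,\dots,m, \qquad x_{k+1} = z_{m,k}
```

The Markov randomized variant replaces the fixed cyclic order i = 1, ..., m with a sequence of agents drawn from a time-nonhomogeneous Markov chain.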
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with a 18.3-dB conversion gain, a 9.5-dBm output 1-dB compression point, a 10.9-dBm saturation output power and 8.8% power added ...
Reduction and IR-drop compensations techniques for reliable neuromorphic computing systems Neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck in Von Neumann architecture. The recent breakthrough on memristor devices has made an important step toward realizing a low-power, small-footprint NCS on-a-chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor crossbar based NCS and hinder the design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of the wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.2
0.2
0.2
0.2
0.2
0.1
0.02
0
0
0
0
0
0
0
Signal receiving and processing platform of the experimental passive radar for intelligent surveillance system using software defined radio approach This document presents a signal receiving and processing platform for an experimental FM radio based multistatic passive radar utilizing Software Defined Radio. This radar was designed as a part of the intelligent surveillance system. Our platform consists of a reconfigurable multi-sensor antenna, radio frequency (RF) front-end hardware and personal computer host executing modified GNU Radio code. As the RF hardware (receiver and downconverter) the Universal Software Radio Peripherals were used. We present and discuss different approaches to construct the multichannel receiver and signal processing platform for passive radar utilizing USRP devices and GNU Radio. Received signals after downconverting on the FPGA of the USRP are transmitted to the PC host where second stage of data processing takes place: namely digital beamforming. Digital beamforming algorithm estimates echo signals reflected from flying target. After estimating echo signals Range-Doppler surfaces can be computed in order to estimate the target position.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
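Chord's core mapping is small enough to sketch. The toy code below (identifier space size and names chosen for illustration) hashes node ids and keys onto a ring and assigns each key to its successor, the first node id clockwise from the key; real Chord resolves this in O(log N) messages via finger tables, and joins and failures are omitted here:

```python
import hashlib

# Consistent hashing onto a 2**M ring with successor-based key ownership.

M = 16  # toy identifier space of size 2**16

def ident(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(key: str, node_ids: list[int]) -> int:
    k = ident(key)
    ring = sorted(node_ids)
    return next((n for n in ring if n >= k), ring[0])  # wrap past the top

nodes = [ident(f"node-{i}") for i in range(8)]
owner = successor("some-data-item", nodes)
```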
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area^2 product (EDA^2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA^2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
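For the generic problem of minimizing f(x) + g(z) subject to Ax + Bz = c, the method's iteration is the familiar three-step update over the augmented Lagrangian L_rho (unscaled form shown, with rho > 0 the penalty parameter and y the dual variable):

```latex
\begin{aligned}
x^{k+1} &:= \arg\min_x \; L_\rho(x, z^k, y^k) \\
z^{k+1} &:= \arg\min_z \; L_\rho(x^{k+1}, z, y^k) \\
y^{k+1} &:= y^k + \rho\,(A x^{k+1} + B z^{k+1} - c)
\end{aligned}
```

The distributed appeal is that splitting the objective across f and g lets each minimization decompose over machines, with only the consensus variables exchanged per round.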
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε^2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Clock-Phase Reuse Technique for Discrete-Time Bandpass Filters In this article, we apply a new clock-phase reuse technique to a discrete-time infinite impulse response (IIR) complex-signaling bandpass filter (BPF). This leads to a marked improvement in filtering, especially in stopband rejection, while maintaining the area, sampling frequency, and the number of clock phases and their pulsewidths. Fabricated in 28-nm CMOS, the proposed BPF is highly tuneable an...
A Charge-Rotating IIR Filter with Linear Interpolation and High Stop-Band Rejection This paper introduces a new architecture of a discrete-time charge-rotating low-pass filter (LPF) which achieves a high order of filtering and improves its stop-band rejection while maintaining a reasonable duty cycle of the main clock at 20%. Its key innovation is a linear interpolation within the charge-accumulation operation. Fabricated in 28-nm CMOS, the proposed IIR LPF demonstrates a 1-9.9MH...
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
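To see why linear interpolation attenuates images, one can compare the hold (sinc) response of a conventional DAC with the sinc-squared response of an ideal linear interpolator. The sketch below is a generic numerical illustration, not the paper's circuit; the tone frequency is a hypothetical choice:

    import numpy as np

    fs = 100e6                       # update rate (matches the paper's 100 MHz)
    f_tone = 10e6                    # hypothetical input tone
    f_img = fs - f_tone              # first image frequency

    zoh = abs(np.sinc(f_img / fs))   # zero-order hold: |sinc(f/fs)|
    lin = np.sinc(f_img / fs) ** 2   # ideal linear interpolation: sinc^2

    # image level in dB; interpolation roughly doubles the attenuation
    print(20 * np.log10(zoh), 20 * np.log10(lin))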
A Quadrature Charge-Domain Sampling Mixer With Embedded FIR, IIR, and N-Path Filters This paper presents the analysis and design of a quadrature charge-domain down-conversion sampling mixer with embedded finite-impulse-response (FIR), infinite-impulse-response (IIR), and 4-path bandpass filters. An in-depth investigation of the principles of periodic impulse sampling, periodic windowed sampling, and periodic N-path windowed sampling is presented and their characteristics are compared. A detailed mathematical treatment of charge-domain windowed samplers with built-in sinc, FIR and IIR filters is provided. A quadrature charge-domain sampler with embedded FIR, IIR, and 4-path band-pass filters is proposed and its performance, including the effect of device non-idealities, is investigated. The proposed design is implemented in an IBM 130 nm 1.2 V CMOS technology. Simulation results demonstrate that the proposed design is tunable over the 50–250 MHz frequency range. For a 100 MHz input, the proposed design exhibits 70 dB aliasing rejection and 60 dB stop-band attenuation while consuming a current of 145 . The performance of the proposed design is further validated using the results of on-wafer measurement of the fabricated microchip.
Low-Power Highly Selective Channel Filtering Using a Transconductor–Capacitor Analog FIR Analog finite-impulse-response (AFIR) filtering is proposed to realize low-power channel-selection filters for Internet-of-Things receivers. High selectivity is achieved using an architecture based on only a single, time-varying, transconductance and integration capacitor. The transconductance is implemented as a digital-to-analog converter and is programmable by an on-chip memory. The AFIR operating principle is shown step by step, including its complete transfer function with aliasing. The filter bandwidth and transfer function are highly programmable through the transconductance coefficients and clock frequency. Moreover, the transconductance programmability allows an almost ideal filter response to be realized by careful analysis and compensation of the parasitic circuit impairments. The filter, manufactured in 22-nm FDSOI, has an active area of 0.09 mm². Its bandwidth can be accurately tuned from 0.06 to 3.4 MHz. The filter consumes 92 μW from a 700-mV supply. This low power consumption is combined with a high selectivity: f−60dB/f−3dB = 3.8. The filter has 31.5-dB gain and 12-nV/√Hz input-referred noise for a 0.43-MHz bandwidth. The OIP3 is 28 dBm, independent of the frequency offset. The output-referred 1-dB compression point is 3.7 dBm, and the in-band gain compresses by 1 dB for a −3.7-dBm out-of-band input signal while still providing >60 dB of filtering.
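The operating principle lends itself to a simple discrete-time model: each clock period the time-varying transconductance integrates the input onto the capacitor with a programmable weight, so N periods implement an N-tap FIR. The tap values and component values below are hypothetical, not the chip's coefficients:

    import numpy as np

    def afir_output(x_win, g, Ts, C):
        # charge accumulated on C over N clock periods, read out once, then reset:
        # y = (Ts / C) * sum_k g[k] * x[k]  -- an N-tap FIR with taps g[k]
        return (Ts / C) * np.dot(g, x_win)

    # e.g. equal taps act like a moving average (a crude low-pass)
    g = np.ones(16)
    y = afir_output(np.random.randn(16), g, Ts=1e-7, C=1e-11)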
Enhanced-Selectivity High-Linearity Low-Noise Mixer-First Receiver With Complex Pole Pair Due to Capacitive Positive Feedback. A mixer-first receiver (RX) with enhanced selectivity and high dynamic range is proposed, targeting to remove surface acoustic-wave-filters in mobile phones and cover all frequency bands up to 6 GHz. Capacitive negative feedback across the baseband (BB) amplifier serves as a blocker bypassing path, while an extra capacitive positive feedback path offers further blocker rejection. This combination ...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
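A minimal sketch of Chord's key-to-node mapping (consistent hashing on an identifier ring); it performs the successor lookup by direct search over a sorted node list and omits the finger tables that give Chord its logarithmic lookup cost:

    import hashlib
    from bisect import bisect_left

    M = 16                                     # identifier bits (toy value)

    def chord_id(name: str) -> int:
        # hash names onto the 2^M identifier ring
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << M)

    def successor(ring, key_id):
        # a key is stored at the first node whose id >= key id, wrapping around
        i = bisect_left(ring, key_id)
        return ring[i % len(ring)]

    ring = sorted(chord_id(f"node-{i}") for i in range(8))
    print(successor(ring, chord_id("some-data-key")))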
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay systems, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of delayed inputs, the digital implementation of distributed delays, control via the delay, and the handling of information related to the delay value.
Bayesian Network Classifiers Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
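For reference, the naive Bayes baseline that TAN generalizes can be written in a few lines; TAN would additionally give each feature one other feature as a parent (a tree over the features), which this sketch omits. Discrete integer features and Laplace smoothing are assumed:

    import numpy as np

    def nb_fit(X, y, n_vals):
        # X: (n_samples, n_features) array of small non-negative ints,
        # n_vals[j]: number of values feature j can take
        classes = np.unique(y)
        prior = {c: np.mean(y == c) for c in classes}
        cond = {c: [(np.bincount(X[y == c][:, j], minlength=n_vals[j]) + 1.0)
                    / ((y == c).sum() + n_vals[j])       # Laplace smoothing
                    for j in range(X.shape[1])]
                for c in classes}
        return prior, cond

    def nb_predict(x, prior, cond):
        # pick the class maximizing log P(c) + sum_j log P(x_j | c)
        score = {c: np.log(prior[c]) + sum(np.log(cond[c][j][v]) for j, v in enumerate(x))
                 for c in prior}
        return max(score, key=score.get)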
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique.
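The standard practical recipe from this line of work is triangular-PDF (TPDF) dither, two LSBs peak-to-peak, added before (and not subtracted after) quantization. A sketch, with the step size delta as the only parameter:

    import numpy as np

    rng = np.random.default_rng(0)

    def quantize_tpdf(x, delta):
        # TPDF dither = sum of two independent uniform(-delta/2, delta/2) samples;
        # it renders the first two moments of the total error input-independent
        d = (rng.uniform(-delta / 2, delta / 2, np.shape(x))
             + rng.uniform(-delta / 2, delta / 2, np.shape(x)))
        return delta * np.round((x + d) / delta)

    y = quantize_tpdf(np.sin(np.linspace(0, 6.28, 1000)), delta=2 ** -8)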
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied by the advent of cognitive radio. Cognitive radio is a new type of technology that enables secondary spectrum usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied resources opportunistically and coexist with wireless local area networks (WLAN). Through a primary-traffic prediction model and a transmission etiquette, OC-MAC avoids causing harmful interference to licensed users. An ns2 simulation model is then developed to evaluate its performance in scenarios with coexisting WLAN and cognitive networks.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup-time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure Vout > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems have happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.025, 0, 0, 0, 0, 0, 0, 0, 0
A Self-Powered and Optimal SSHI Circuit Integrated With an Active Rectifier for Piezoelectric Energy Harvesting. This paper presents a piezoelectric energy harvesting circuit, which integrates a Synchronized Switch Harvesting on Inductor (SSHI) circuit and an active rectifier. The major design challenge of the SSHI method is flipping the capacitor voltage at optimal times. The proposed SSHI circuit inserts an active diode on each resonant loop, which ensures flipping of the capacitor voltage at optimal times...
An Inductorless Bias-Flip Rectifier for Piezoelectric Energy Harvesting. Piezoelectric vibration energy harvesters have drawn much interest for powering self-sustained electronic devices. Furthermore, the continuous push toward miniaturization and higher levels of integration continues to form key drivers for autonomous sensor systems being developed as parts of the emerging Internet of Things (IoT) paradigm. The synchronized switch harvesting (SSH) on inductor and syn...
A 90.2% Peak Efficiency Multi-Input Single-Inductor Multi-Output Energy Harvesting Interface With Double-Conversion Rejection Technique and Buck-Based Dual-Conversion Mode This article presents a multi-input single-inductor multi-output energy-harvesting interface that extracts power from three independent sources and regulates three output voltages. The converter employs the proposed double-conversion rejection technique to reduce the double-converted power by up to 81.8% under the light-load condition and operates in various power conversion modes, including the proposed buck-based dual-conversion mode, to improve the power conversion efficiency and maximum load power. The proposed adaptive peak inductor current controller determines the inductor charging period, and the proposed digitally controlled zero-current detector detects the optimum zero-current point according to the operating mode. The proposed converter achieves a peak end-to-end efficiency of 90.2% and a maximum output power of 24 mW, indicating the improvements of approximately 7.52% and 1.85 times, respectively, compared with those of conventional buck-boost converters.
A Self-Powered P-SSHI Array Interface for Piezoelectric Energy Harvesters With Arbitrary Phase Difference Piezoelectric energy harvester (PEH) arrays are promising in many application scenarios. However, few interface circuits have been developed to manage the multiple ac inputs from PEHs. This article proposes an extensible parallel synchronized switch harvesting on inductor array interface scheme to realize a multiinput conversion from PEH arrays. We develop a split-inductor-capacitor topology that ...
A Parallel-SSHI Rectifier for Piezoelectric Energy Harvesting of Periodic and Shock Excitations. Piezoelectric harvesters are capable of generating energy out of ambient vibrations. Dedicated interface circuits can significantly increase the harvesting capabilities compared with passive rectifiers. This paper presents an autonomous piezoelectric energy harvesting system in a 0.35-μm CMOS process. The implemented interface is based on the parallel-SSHI technique and can harvest from periodic a...
Digital pulse frequency modulation for switched capacitor DC-DC converter on 65nm process The DC-DC converter is one of the most important building blocks in any System-on-Chip (SoC): it supplies various voltage levels to the various loads of the chip while achieving high power efficiency. Pulse frequency modulation (PFM) is the main control technique used for voltage regulation of the switched-capacitor DC-DC power converter. This paper proposes a design of a digital pulse frequency modulator using Verilog-HDL, verified on a 65nm low-power process technology. The design includes the generation of the non-overlapping clock by a ring oscillator and a dead-time circuit instead of the default clock. The PFM block has a total power of 7 μW, an area of 46.4 μm², and a slack time of 0.5 ns.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. Finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
Self-stabilizing systems in spite of distributed control The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2MHz and an FoM of 53 fJ/conversion-step.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
scores: 1.11, 0.1, 0.1, 0.1, 0.05, 0.02, 0, 0, 0, 0, 0, 0, 0, 0
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing through combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per each memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Rebooting the Data Access Hierarchy of Computing Systems We have been experiencing two very important movements in computing. On the one hand, a tremendous amount of resource has been invested into innovative applications such as first-principle-based methods, deep learning and cognitive computing. On the other hand, the industry has been taking a technological path where application performance and energy efficiency vary by more than two orders of magnitude depending on their parallelism, heterogeneity, and locality. We envision that a "perfect storm" is coming because of the interaction between these two movements. Many of these new and high-valued applications need to touch a very large amount of data with little data reuse and data movement has become the dominating factor for both power and performance of these applications. It will be critical to match the compute throughput to the data access bandwidth and to locate the compute near data. Much has been and continuously needs to be learned about algorithms, languages, compilers and hardware architecture in this movement. What are the killer applications that may become the new driver for future technology development? How hard is it to program existing systems to address the data movement issues today? How will we program these systems in the future? How will innovations in memory devices present further opportunities and challenges in designing new systems? What is the impact on long-term software engineering cost of applications? In this paper, we present some lessons learned as we design the IBM-Illinois C3SR (Center for Cognitive Computing Systems Research) Erudite system inside this perfect storm.
Hyper-AP: Enhancing Associative Processing Through A Full-Stack Optimization Associative processing (AP) is a promising PIM paradigm that overcomes the von Neumann bottleneck (memory wall) by virtue of a radically different execution model. By decomposing arbitrary computations into a sequence of primitive memory operations (i.e., search and write), AP's execution model supports concurrent SIMD computations in-situ in the memory array to eliminate the need for data movement. This execution model also provides native support for flexible data types and only requires a minimal modification of the existing memory design (low hardware complexity). Despite these advantages, the execution model of AP has two limitations that substantially increase the execution time, i.e., 1) it can only search a single pattern in one search operation and 2) it needs to perform a write operation after each search operation. In this paper, we propose the Highly Performant Associative Processor (Hyper-AP) to fully address the aforementioned limitations. The core of Hyper-AP is an enhanced execution model that reduces the number of search and write operations needed for computations, thereby reducing the execution time. This execution model is generic and improves the performance for both CMOS-based and RRAM-based AP, but it is more beneficial for the RRAM-based AP due to the substantially reduced write operations. We then provide a complete architecture and micro-architecture with several optimizations to efficiently implement Hyper-AP. In order to reduce the programming complexity, we also develop a compilation framework so that users can write C-like programs with several constraints to run applications on Hyper-AP. Several optimizations have been applied in the compilation process to exploit the unique properties of Hyper-AP. Our experimental results show that, compared with the recent work IMP, Hyper-AP achieves up to 54×/4.4× better power-/area-efficiency for various representative arithmetic operations. For the evaluated benchmarks, Hyper-AP achieves 3.3× speedup and 23.8× energy reduction on average compared with IMP. Our evaluation also confirms that the proposed execution model is more beneficial for the RRAM-based AP than its CMOS-based counterpart.
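The execution model is easy to caricature: a primitive pass searches all rows for a pattern and then writes a value into the matching rows. The toy below computes out = a AND b with a single search-write pass over a bit-table; everything here (field names, table layout) is illustrative only:

    # rows of a content-addressable table; 'out' is assumed pre-initialized to 0
    mem = [{"a": a, "b": b, "out": 0} for a in (0, 1) for b in (0, 1)]

    # one search-write pass: match the only input pattern (a=1, b=1) for which
    # AND is true, then write 1 into the matching rows -- conceptually this
    # match-and-write happens in parallel across all rows of the memory array
    for row in mem:
        if row["a"] == 1 and row["b"] == 1:   # "search" (associative match)
            row["out"] = 1                     # "write"

    print(mem)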
3.2 Zen: A next-generation high-performance ×86 core Codenamed "Zen", AMD's next-generation, high-performance ×86 core targets server, desktop, and mobile client applications. Utilizing Global Foundries' energy-efficient 14nm LPP FinFET process, the 44mm² Zen core complex unit (CCX) has 1.4B transistors and contains a shared 8MB L3 cache and four cores (Fig. 3.2.7). The 7mm² Zen core contains a dedicated 0.5MB L2 cache, 32KB L1 data cache, and 64KB L1 instruction cache. Each core has a digital low drop-out (LDO) voltage regulator and digital frequency synthesizer (DFS) to independently vary frequency and voltage across power states.
ELP2IM: Efficient and Low Power Bitwise Operation Processing in DRAM Recently proposed DRAM based memory-centric architectures have demonstrated their great potentials in addressing the memory wall challenge of modern computing systems. Such architectures exploit charge sharing of multiple rows to enable in-memory bitwise operations. However, existing designs rely heavily on reserved rows to implement computation, which introduces high data movement overhead, large operation latency, large energy consumption, and low operation reliability. In this paper, we propose ELP²IM, an efficient and low power processing in-memory architecture, to address the above issues. ELP²IM utilizes two stable states of sense amplifiers in DRAM subarrays so that it can effectively reduce the number of intra-subarray data movements as well as the number of concurrently opened DRAM rows, which exhibits great performance and energy consumption advantages over existing designs. Our experimental results show that the power efficiency of ELP²IM is more than 2× improvement over the state-of-the-art DRAM based memory-centric designs in real application.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
Column-oriented database systems Column-oriented database systems (column-stores) have attracted a lot of attention in the past few years. Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other. Reading a subset of a table's columns becomes faster, at the potential expense of excessive disk-head seeking from column to column for scattered reads or updates. After several dozens of research papers and at least a dozen of new column-store start-ups, several questions remain. Are these a new breed of systems or simply old wine in new bottles? How easily can a major row-based system achieve column-store performance? Are column-stores the answer to effortlessly support large-scale data-intensive applications? What are the new, exciting system research problems to tackle? What are the new applications that can be potentially enabled by column-stores? In this tutorial, we present an overview of column-oriented database system technology and address these and other related questions.
Fused-layer CNN accelerators. Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. The conventional approaches to designing such CNN accelerators is to focus on creating accelerators to iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.
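The core idea can be shown with a toy single-channel example: a tile of the second layer's output is computed directly from an input tile enlarged by the combined halo, so the first layer's activations never leave the (simulated) on-chip buffer. Shapes, stride-1 kernels, and the ReLU placement are assumptions of this sketch:

    import numpy as np

    def conv2d_valid(x, w):
        # naive 'valid' convolution, single channel, stride 1
        k = w.shape[0]
        out = np.zeros((x.shape[0] - k + 1, x.shape[1] - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
        return out

    def fused_tile(x_tile, w1, w2):
        # layer-1 activations live only in this local variable ("on chip");
        # they are never written back to the caller ("off-chip DRAM")
        return conv2d_valid(np.maximum(conv2d_valid(x_tile, w1), 0.0), w2)

    # an output tile of layer 2 needs an input tile padded by
    # (k1 - 1) + (k2 - 1) rows/cols of halo
    y = fused_tile(np.random.randn(12, 12), np.random.randn(3, 3), np.random.randn(3, 3))
    print(y.shape)   # (8, 8)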
The Case for Wireless Overlay Networks Wireless data services, other than those for electronic mail or paging, have thus far been more of a promise than a success. We believe that future mobile information systems must be built upon heterogeneous wireless overlay networks, extending traditional wired and internetworked processing “islands” to hosts on the move over coverage areas ranging from in-room, in-building, campus, metropolitan, and wide-areas. Unfortunately, network planners continue to think in terms of homogeneous wireless communications systems and technologies. In this paper, we describe our approach towards a new wireless data networking architecture that integrates diverse wireless technologies into a seamless wireless (and wireline) internetwork. In addition, we describe the applications support services needed to make it possible for applications to continue to operate as mobile hosts roam across such networks.
A 12-GS/s Phase-Calibrated CMOS Digital-to-Analog Converter for Backplane Communications A 12-GS/s 8-bit digital-to-analog converter (DAC) enables 24 Gb/s signaling over conventional backplane channels. Designed in a 90-nm CMOS process, the circuit occupies an area of 670 μm × 350 μm and achieves INL and DNL of 0.31 and 0.28 LSB, respectively. Measured SNDR and SFDR are 41 dB and 51 dB at 750 MHz and 32.5 dB and 35 dB at 1.5 GHz. Measured SDR and 3-dB bandwidth using 12 GS/s random data are 32 dB and 7.1 GHz, respectively. The power dissipation is 190 mW from 1-V and 1.8-V power supplies.
Reconfigurable cognitive transceiver for opportunistic networks. In this work, we provide the implementation and analysis of a cognitive transceiver for opportunistic networks. We focus on a previously introduced dynamic spectrum access (DSA) - cognitive radio (CR) solution for primary-secondary coexistence in opportunistic orthogonal frequency division multiplexing (OFDM) networks, called cognitive interference alignment (CIA). The implementation is based on software-defined radio (SDR) and uses GNU Radio and the universal software radio peripheral (USRP) as the implementation toolkit. The proposed flexible transceiver architecture allows efficient on-the-fly reconfigurations of the physical layer into OFDM, CIA or a combination of both. Remarkably, its responsiveness is such that the uplink and downlink channel reciprocity from the medium perspective, inherent to time division duplex (TDD) communications, can be effectively verified and exploited. We show that CIA provides approximately 10 dB of interference isolation towards the OFDM receiver with respect to a fully random precoder. This result is obtained under suboptimal conditions, which indicates that further gains are possible with a better optimization of the system. Our findings point towards the usefulness of a practical CIA implementation, as it yields a non-negligible performance for the secondary system, while providing interference shielding to the primary receiver.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge-averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated in a 180-nm CMOS process and occupies a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieves an effective number of bits of 9.61 and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
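For context, the basic successive-approximation loop behind any SAR ADC is shown below. It is idealized (no redundancy), so it lacks the event-triggered correction cycle this brief adds:

    def sar_convert(vin, vref, nbits=10):
        # binary search: trial-set each bit from MSB to LSB and keep it
        # only if the resulting DAC level stays at or below the input
        code = 0
        for bit in range(nbits - 1, -1, -1):
            trial = code | (1 << bit)
            if vin >= trial * vref / (1 << nbits):   # comparator decision
                code = trial
        return code

    print(sar_convert(0.3333, vref=1.0))  # ~341 out of 1023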
scores: 1.029437, 0.028571, 0.028571, 0.028571, 0.028571, 0.028571, 0.020408, 0.006466, 0.000018, 0, 0, 0, 0, 0
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly, to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Reconfigurable Radios: A Possible Solution to Reduce Entry Costs in Wireless Phones With advances in telecommunications, an increasing number of services rely on high data rate spectrum access. These critical services include banking, telemedicine, and exchange of technical information. As a result, spectrum resources are in ever-greater demand and the radio spectrum has become overly crowded. For efficient usage of spectrum, smart or cognitive radios are sought after. However, current wireless phones can only select a few specific bands. In this paper, we discuss the advantages of reconfigurable radios in not only increasing the efficiency of spectrum usage but also in potentially reducing the cost of wireless handsets and the barriers for new wireless service providers to enter the market. We review available technologies that make the implementation of reconfigurable radios possible and discuss technical challenges that need to be overcome before multistandard reconfigurable radios are put into practice. We also evaluate the ability of reconfigurable radios in reducing entry costs for new competitors in wireless service.
A detailed power model for field-programmable gate arrays Power has become a critical issue for field-programmable gate array (FPGA) vendors. Understanding the power dissipation within FPGAs is the first step in developing power-efficient architectures and computer-aided design (CAD) tools for FPGAs. This article describes a detailed and flexible power model which has been integrated in the widely used Versatile Place and Route (VPR) CAD tool. This power model estimates the dynamic, short-circuit, and leakage power consumed by FPGAs. It is the first flexible power model developed to evaluate architectural tradeoffs and the efficiency of power-aware CAD tools for a variety of FPGA architectures, and is freely available for noncommercial use. The model is flexible, in that it can estimate the power for a wide variety of FPGA architectures, and it is fast, in that it does not require extensive simulation, meaning it can be used to explore a large architectural space. We show how the model can be used to investigate the impact of various architectural parameters on the energy consumed by the FPGA, focusing on the segment length, switch block topology, lookup-table size, and cluster size.
Hardware acceleration of database operations As the amount of memory in database systems grows, entire database tables, or even databases, are able to fit in the system's memory, making in-memory database operations more prevalent. This shift from disk-based to in-memory database systems has contributed to a move from row-wise to columnar data storage. Furthermore, common database workloads have grown beyond online transaction processing (OLTP) to include online analytical processing and data mining. These workloads analyze huge datasets that are often irregular and not indexed, making traditional database operations like joins much more expensive. In this paper we explore using dedicated hardware to accelerate in-memory database operations. We present hardware to accelerate the selection process of compacting a single column into a linear column of selected data, joining two sorted columns via merging, and sorting a column. Finally, we put these primitives together to accelerate an entire join operation. We implement a prototype of this system using FPGAs and show substantial improvements in both absolute throughput and utilization of memory bandwidth. Using the prototype as a guide, we explore how the hardware resources required by our design change with the desired throughput.
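Of the accelerated primitives, the sorted-merge join is the easiest to state in software; the hardware replaces this loop with a streaming comparator pipeline. A reference sketch with duplicate-key handling:

    def merge_join(a, b):
        # a, b: sorted lists of join keys; returns matching (i, j) index pairs
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] < b[j]:
                i += 1
            elif a[i] > b[j]:
                j += 1
            else:
                # emit the cross product of the equal-key runs on both sides
                v, i2, j2 = a[i], i, j
                while i2 < len(a) and a[i2] == v:
                    i2 += 1
                while j2 < len(b) and b[j2] == v:
                    j2 += 1
                out.extend((x, y) for x in range(i, i2) for y in range(j, j2))
                i, j = i2, j2
        return out

    print(merge_join([1, 3, 3, 7], [3, 3, 5, 7]))
    # [(1, 0), (1, 1), (2, 0), (2, 1), (3, 3)]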
Sparse matrix multiplication: The distributed block-compressed sparse row library. • An implementation of a sparse matrix–matrix multiplication library is described. • The library was developed to support linear-scaling quantum simulations. • Performance is high for a variety of matrix sparsities. • We show that the library scales to tens of thousands of processor cores.
Architecture Design of Reconfigurable Pipelined Datapaths This paper examines reconfigurable pipelined datapaths (RaPiDs), a new architecture style for computation-intensive applications that bridges the cost/performance gap between general purpose and application specific architectures. RaPiDs can provide significantly higher performance than general purpose processors on a wide range of applications from the areas of video and signal processing, scientific computing, and communications. Moreover, RaPiDs provide the flexibility that doesn't come with application-specific architectures.A RaPiD architecture is optimized for highly repetitive, computationally-intensive tasks. Very deep application-specific computation pipelines can be configured that deliver very high performance for a wide range of applications. RaPiDs achieve this using a coarse-grained reconfigurable architecture that mixes the appropriate amount of static configuration with dynamic control.We describe the fundamental features of a RaPiD architecture, including the linear array of functional units, a programmable segmented bus structure, and a programmable control architecture. In addition, we outline the floorplan of the architecture and provide timing data for the most critical paths. We conclude with performance numbers for several applications on an instance of a RaPiD architecture.
The CORDIC Trigonometric Computing Technique The COordinate Rotation DIgital Computer (CORDIC) is a special-purpose digital computer for real-time airborne computation. In this computer, a unique computing technique is employed which is especially suitable for solving the trigonometric relationships involved in plane coordinate rotation and conversion from rectangular to polar coordinates. CORDIC is an entire-transfer computer; it contains a special serial arithmetic unit consisting of three shift registers, three adder-subtractors, and special interconnections. By use of a prescribed sequence of conditional additions or subtractions, the CORDIC arithmetic unit can be controlled to solve either set of the following equations: Y' = K(Y cos θ + X sin θ), X' = K(X cos θ − Y sin θ); or R = K·√(X² + Y²), θ = tan⁻¹(Y/X), where K is an invariable constant. This special arithmetic unit is also suitable for other computations such as multiplication, division, and the conversion between binary and mixed radix number systems. However, only the trigonometric algorithms used in this computer and the instrumentation of these algorithms are discussed in this paper.
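A software model of the rotation-mode iteration (shift-add steps driving the residual angle to zero) makes the algorithm concrete; the constant gain is folded in as a correction K at the end. This is a generic CORDIC sketch, not the paper's serial hardware:

    import math

    def cordic_rotate(x, y, theta, n=24):
        # valid for |theta| <~ 1.74 rad (the sum of the atan table)
        K = 1.0
        for k in range(n):
            K /= math.sqrt(1 + 2.0 ** (-2 * k))     # accumulate 1/gain
            d = 1.0 if theta >= 0 else -1.0         # rotate toward theta = 0
            x, y = x - d * y * 2.0 ** -k, y + d * x * 2.0 ** -k
            theta -= d * math.atan(2.0 ** -k)
        return x * K, y * K

    print(cordic_rotate(1.0, 0.0, math.pi / 4))     # ~ (0.7071, 0.7071)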
MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects. Deep neural networks (DNN) have demonstrated highly promising results across computer vision and speech recognition, and are becoming foundational for ubiquitous AI. The computational complexity of these algorithms and a need for high energy-efficiency has led to a surge in research on hardware accelerators for this paradigm. To reduce the latency and energy costs of accessing DRAM, most DNN accelerators are spatial in nature, with hundreds of processing elements (PE) operating in parallel and communicating with each other directly. DNNs are evolving at a rapid rate, and it is common to have convolution, recurrent, pooling, and fully-connected layers with varying input and filter sizes in the most recent topologies. They may be dense or sparse. They can also be partitioned in myriad ways (within and across layers) to exploit data reuse (weights and intermediate outputs). All of the above can lead to different dataflow patterns within the accelerator substrate. Unfortunately, most DNN accelerators support only fixed dataflow patterns internally as they perform a careful co-design of the PEs and the network-on-chip (NoC). In fact, the majority of them are only optimized for traffic within a convolutional layer. This makes it challenging to map arbitrary dataflows on the fabric efficiently, and can lead to underutilization of the available compute resources. DNN accelerators need to be programmable to enable mass deployment. For them to be programmable, they need to be configurable internally to support the various dataflow patterns that could be mapped over them. To address this need, we present MAERI, which is a DNN accelerator built with a set of modular and configurable building blocks that can easily support myriad DNN partitions and mappings by appropriately configuring tiny switches. MAERI provides 8-459% better utilization across multiple dataflow mappings over baselines with rigid NoC fabrics.
BSSync: Processing Near Memory for Machine Learning Workloads with Bounded Staleness Consistency Models. Parallel machine learning workloads have become prevalent in numerous application domains. Many of these workloads are iterative convergent, allowing different threads to compute in an asynchronous manner, relaxing certain read-after-write data dependencies to use stale values. While considerable effort has been devoted to reducing the communication latency between nodes by utilizing asynchronous parallelism, inefficient utilization of relaxed consistency models within a single node has caused parallel implementations to have low execution efficiency. The long latency and serialization caused by atomic operations have a significant impact on performance. The data communication is not overlapped with the main computation, which reduces execution efficiency. The inefficiency comes from the data movement between where they are stored and where they are processed. In this work, we propose Bounded Staled Sync (BSSync), a hardware support for the bounded staleness consistency model, which accompanies simple logic layers in the memory hierarchy. BSSync overlaps the long latency atomic operation with the main computation, targeting iterative convergent machine learning workloads. Compared to previous work that allows staleness for read operations, BSSync utilizes staleness for write operations, allowing stale-writes. We demonstrate the benefit of the proposed scheme for representative machine learning workloads. On average, our approach outperforms the baseline asynchronous parallel implementation by 1.33×.
Memory safety without garbage collection for embedded applications Traditional approaches to enforcing memory safety of programs rely heavily on run-time checks of memory accesses and on garbage collection, both of which are unattractive for embedded applications. The goal of our work is to develop advanced compiler techniques for enforcing memory safety with minimal run-time overheads. In this paper, we describe a set of compiler techniques that, together with minor semantic restrictions on C programs and no new syntax, ensure memory safety and provide most of the error-detection capabilities of type-safe languages, without using garbage collection, and with no run-time software checks, (on systems with standard hardware support for memory management). The language permits arbitrary pointer-based data structures, explicit deallocation of dynamically allocated memory, and restricted array operations. One of the key results of this paper is a compiler technique that ensures that dereferencing dangling pointers to freed memory does not violate memory safety, without annotations, run-time checks, or garbage collection, and works for arbitrary type-safe C programs. Furthermore, we present a new interprocedural analysis for static array bounds checking under certain assumptions. For a diverse set of embedded C programs, we show that we are able to ensure memory safety of pointer and dynamic memory usage in all these programs with no run-time software checks (on systems with standard hardware memory protection), requiring only minor restructuring to conform to simple type restrictions. Static array bounds checking fails for roughly half the programs we study due to complex array references, and these are the only cases where explicit run-time software checks would be needed under our language and system assumptions.
On the Artifacts of Random Waypoint Simulations
An analytical study of a structured overlay in the presence of dynamic membership In this paper, we present an analytical study of dynamic membership (aka churn) in structured peer-to-peer networks. We use a fluid model approach to describe steady-state or transient phenomena and apply it to the Chord system. For any rate of churn and stabilization rates and any system size, we accurately account for the functional form of the probability of network disconnection as well as the fraction of failed or incorrect successor and finger pointers. We show how we can use these quantities to predict both the performance and consistency of lookups under churn. All theoretical predictions match simulation results. The analysis includes both features that are generic to structured overlays deploying a ring as well as Chord-specific details and opens the door to a systematic comparative analysis of, at least, ring-based structured overlay systems under churn.
Lossy data compression using FDCT for haptic communication In this paper, a DCT-based lossy haptic data compression method for haptic communication systems is proposed to reduce the data size flowing between a master and a slave system. The calculation load of the DCT can be high, and the performance and stability of the system can deteriorate due to this load. To keep the system hard real-time and the performance high, a fast calculation algorithm for the DCT is adopted, and the calculation load is balanced over several sampling periods. The time delay introduced through the compression/expansion of the haptic data is predictable and constant. The time delay, therefore, can be compensated by a time-delay compensator. Furthermore, since the delay in this paper is small enough, stable contact with a hard environment is achieved without any time-delay compensator. The validity of the proposed lossy haptic data compression method is shown through simulation and experimental results.
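The compression step itself is simple once a (fast) DCT is available: transform a block, keep only the largest coefficients, and transmit those. The sketch uses a naive O(N²) DCT-II for clarity where the paper uses a fast algorithm, and the block size and retention count are arbitrary choices:

    import numpy as np

    def dct_ii(x):
        # naive DCT-II; an FDCT computes the same result in O(N log N)
        N = len(x)
        n = np.arange(N)
        return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N)) for k in range(N)])

    def lossy_compress(block, keep):
        c = dct_ii(block)
        c[np.argsort(np.abs(c))[:-keep]] = 0.0   # drop all but the `keep` largest terms
        return c                                  # only the nonzeros need transmitting

    c = lossy_compress(np.sin(np.linspace(0, 3, 64)), keep=8)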
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
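The sampling policy can be modeled in a few lines: estimate the local slope and admit a new sample only after an interval that shrinks as the slope grows. The mapping from slope to rate below is an arbitrary placeholder, not the pulse generator's actual characteristic:

    import numpy as np

    def slope_dependent_sample(t, x, base_rate, max_rate, k=1.0):
        # keep a sample only when enough time has passed for the slope-driven rate
        keep = [0]
        for i in range(1, len(t)):
            slope = abs(x[i] - x[i - 1]) / (t[i] - t[i - 1])
            rate = min(max_rate, base_rate * (1.0 + k * slope))  # hypothetical mapping
            if t[i] - t[keep[-1]] >= 1.0 / rate:
                keep.append(i)
        return np.asarray(keep)

    t = np.linspace(0, 1, 10_000)
    idx = slope_dependent_sample(t, np.sin(2 * np.pi * 5 * t), base_rate=50, max_rate=500)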
scores: 1.026868, 0.030575, 0.025728, 0.025728, 0.025, 0.017638, 0.008333, 0.002722, 0.000234, 0, 0, 0, 0, 0
TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip The new era of cognitive computing brings forth the grand challenge of developing systems capable of processing massive amounts of noisy multisensory data. This type of intelligent computing poses a set of constraints, including real-time operation, low-power consumption and scalability, which require a radical departure from conventional system design. Brain-inspired architectures offer tremendou...
Spiking Neural Networks Hardware Implementations and Challenges: A Survey Neuromorphic computing is now a major research field for both academic and industrial actors. As opposed to Von Neumann machines, brain-inspired processors aim at bringing closer the memory and the computational elements to efficiently evaluate machine learning algorithms. Recently, spiking neural networks, a generation of cognitive algorithms employing computational primitives mimicking neuron and synapse operational principles, have become an important part of deep learning. They are expected to improve the computational performance and efficiency of neural networks, but they are best suited for hardware able to support their temporal dynamics. In this survey, we present the state of the art of hardware implementations of spiking neural networks and the current trends in algorithm elaboration from model selection to training mechanisms. The scope of existing solutions is extensive; we thus present the general framework and study on a case-by-case basis the relevant particularities. We describe the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level and discuss their related advantages and challenges.
A 65-nm Neuromorphic Image Classification Processor With Energy-Efficient Training Through Direct Spike-Only Feedback Recent advances in neural network (NN) and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance NN accelerators for use inside the server systems to energy-efficient edge computing systems. While most of these studies have focused on designing inference engines, implementing the training process of an NN for energy-constrained mobile devices has remained a challenge due to the requirement of higher numerical precision. In this article, we aim to build an on-chip learning system that would show highly energy-efficient training for NNs without degradation in the performance for machine learning tasks. To achieve this goal, we adapt and optimize a neuromorphic learning algorithm and propose hardware design techniques to fully exploit the properties of the modifications. We verify that our system achieves energy-efficient training with only 7.5% more energy consumption compared with its highly efficient inference of 236 nJ/image on the handwritten digit [Modified National Institute of Standards and Technology database (MNIST)] images. Moreover, our system achieves 97.83% classification accuracy on the MNIST test data set, which outperforms prior neuromorphic on-chip learning systems and is close to the performance of backpropagation, the conventional method for training deep NNs.
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120 MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2 GHz general purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture, we demonstrate the energy benefit resulting from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, this is the first time that approximate computing and parallel processing have been applied to FPGA-based spiking neural networks. The influence of the parallel processing on the benefits of approximate computing is also discussed in detail.
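The flavor of approximate arithmetic is easy to preview in software. The toy multiplier below, a plausible stand-in rather than the unit evaluated in the paper, truncates low-order operand bits before multiplying and reports the resulting relative error:

    # Toy truncation-based approximate multiplier; the dropped bit width
    # is an illustrative assumption, not the paper's design.
    def approx_mul(a, b, drop_bits=4):
        return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

    exact = 1234 * 5678
    approx = approx_mul(1234, 5678)
    print(exact, approx, abs(exact - approx) / exact)  # small relative error

Networks with built-in resilience, as the abstract notes, tolerate such small arithmetic errors with almost no loss of recognition rate.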
Presynaptic Spike-Driven Spike Timing-Dependent Plasticity With Address Event Representation for Large-Scale Neuromorphic Systems Learning plays an important role in the brain to make it adaptive to dynamical environments. This paper presents a presynaptic spike-driven spike timing-dependent plasticity (STDP) learning rule in the address domain for a neuromorphic architecture using a synaptic connectivity table in an external memory at a local routing node. We contribute two aspects to the implementation of the learning rule...
First-Spike-Based Visual Categorization Using Reward-Modulated STDP. Reinforcement learning (RL) has recently regained popularity with major achievements such as beating the European champion of the game of Go. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where...
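Reward-modulated STDP is commonly written as a three-factor rule: an STDP-shaped eligibility trace accumulated over spike pairs, scaled by a scalar reward. The sketch below follows that generic form with invented constants; it is not the exact rule of the paper.

    # Generic three-factor reward-modulated STDP update (illustrative constants).
    import numpy as np

    def rstdp_update(w, pre_spikes, post_spikes, reward,
                     a_plus=0.01, a_minus=0.012, tau=20.0, lr=0.1):
        """Accumulate an STDP eligibility trace over spike pairs, then gate
        the weight change with the reward signal."""
        elig = 0.0
        for t_pre in pre_spikes:
            for t_post in post_spikes:
                dt = t_post - t_pre
                if dt > 0:
                    elig += a_plus * np.exp(-dt / tau)    # pre-before-post
                elif dt < 0:
                    elig -= a_minus * np.exp(dt / tau)    # post-before-pre
        return np.clip(w + lr * reward * elig, 0.0, 1.0)

    w = rstdp_update(0.5, pre_spikes=[10.0, 30.0],
                     post_spikes=[12.0, 28.0], reward=+1.0)
    print(w)

With reward = -1 the same trace depresses the synapse, which is how the reward signal replaces an external classifier.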
DianNao family: energy-efficient hardware accelerators for machine learning. Machine Learning (ML) tasks are becoming pervasive in a broad range of applications, and in a broad range of systems (from embedded systems to data centers). As computer architectures evolve toward heterogeneous multi-cores composed of a mix of cores and hardware accelerators, designing hardware accelerators for ML techniques can simultaneously achieve high efficiency and broad application scope. While efficient computational primitives are important for a hardware accelerator, inefficient memory transfers can potentially void the throughput, energy, or cost advantages of accelerators, that is, an Amdahl's law effect, and thus, they should become a first-order concern, just like in processors, rather than an element factored in accelerator design on a second step. In this article, we introduce a series of hardware accelerators (i.e., the DianNao family) designed for ML (especially neural networks), with a special emphasis on the impact of memory on accelerator design, performance, and energy. We show that, on a number of representative neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip DaDianNao system (a member of the DianNao family).
TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory. The high accuracy of deep neural networks (NNs) has led to the development of NN accelerators that improve performance by two orders of magnitude. However, scaling these accelerators for higher performance with increasingly larger NNs exacerbates the cost and energy overheads of their memory systems, including the on-chip SRAM buffers and the off-chip DRAM channels. This paper presents the hardware architecture and software scheduling and partitioning techniques for TETRIS, a scalable NN accelerator using 3D memory. First, we show that the high throughput and low energy characteristics of 3D memory allow us to rebalance the NN accelerator design, using more area for processing elements and less area for SRAM buffers. Second, we move portions of the NN computations close to the DRAM banks to decrease bandwidth pressure and increase performance and energy efficiency. Third, we show that despite the use of small SRAM buffers, the presence of 3D memory simplifies dataflow scheduling for NN computations. We present an analytical scheduling scheme that matches the efficiency of schedules derived through exhaustive search. Finally, we develop a hybrid partitioning scheme that parallelizes the NN computations over multiple accelerators. Overall, we show that TETRIS improves the performance by 4.1x and reduces the energy by 1.5x over NN accelerators with conventional, low-power DRAM memory systems.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set: each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. There is also a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
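The diffusion of node identities can be approximated by the classic floodmax/floodmin pattern: d synchronous rounds that spread the largest identifier, followed by d rounds that let smaller identifiers reclaim nodes. The sketch below shows only this core and omits the heuristic's additional clusterhead-selection and tie-breaking rules.

    # Simplified floodmax/floodmin core of Max-Min d-clustering.
    # The full heuristic applies extra rules after these two phases.
    def floodmax_floodmin(adj, d):
        winner = {v: v for v in adj}             # each node starts with its own id
        for _ in range(d):                       # floodmax: spread large ids
            winner = {v: max([winner[v]] + [winner[u] for u in adj[v]])
                      for v in adj}
        for _ in range(d):                       # floodmin: smaller ids reclaim
            winner = {v: min([winner[v]] + [winner[u] for u in adj[v]])
                      for v in adj}
        return winner                            # v with winner[v] == v leads a cluster

    ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(floodmax_floodmin(ring, d=2))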
Class-C Harmonic CMOS VCOs, With a General Result on Phase Noise A harmonic oscillator topology displaying an improved phase noise performance is introduced in this paper. Exploiting the advantages yielded by operating the core transistors in class-C, a theoretical 3.9 dB phase noise improvement compared to the standard differential-pair LC-tank oscillator is achieved for the same current consumption. Further benefits derive from the natural rejection of the tail bias current noise, and from the absence of parasitic nodes sensitive to stray capacitances. Closed-form phase-noise equations obtained from a rigorous time-variant circuit analysis are presented, as well as a time-variant study of the stability of the oscillation amplitude, resulting in simple guidelines for a reliable design. Furthermore, the analysis of phase noise is extended to encompass a general harmonic oscillator, showing that all phase noise relations previously obtained for specific LC oscillator topologies are special cases of a very general and remarkably simple result.
Low-power area-efficient high-speed I/O circuit techniques We present a 4-Gb/s I/O circuit that fits in 0.1 mm² of die area, dissipates 90 mW of power, and operates over 1 m of 7-mil 0.5-oz PCB trace in a 0.25-μm CMOS technology. Swing reduction is used in an input-multiplexed transmitter to provide most of the speed advantage of an output-multiplexed architecture with significantly lower power and area. A delay-locked loop (DLL) using a supply-regulated inverter delay line gives very low jitter at a fraction of the power of a source-coupled delay line-based DLL. Receiver capacitive offset trimming decreases the minimum resolvable swing to 8 mV, greatly reducing the transmission energy without affecting the performance of the receive amplifier. These circuit techniques enable a high level of I/O integration to relieve the pin bandwidth bottleneck of modern VLSI chips.
Nonlinear semidefinite programming: sensitivity, convergence, and an application in passive reduced-order modeling We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. This paper studies a sequential semidefinite programming (SSP) method, which is a generalization of the well-known sequential quadratic programming method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and then based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
Automated text mining for requirements analysis of policy documents Businesses and organizations in jurisdictions around the world are required by law to provide their customers and users with information about their business practices in the form of policy documents. Requirements engineers analyze these documents as sources of requirements, but this analysis is a time-consuming and mostly manual process. Moreover, policy documents contain legalese and present readability challenges to requirements engineers seeking to analyze them. In this paper, we perform a large-scale analysis of 2,061 policy documents, including policy documents from the Google Top 1000 most visited websites and the Fortune 500 companies, for three purposes: (1) to assess the readability of these policy documents for requirements engineers; (2) to determine if automated text mining can indicate whether a policy document contains requirements expressed as either privacy protections or vulnerabilities; and (3) to establish the generalizability of prior work in the identification of privacy protections and vulnerabilities from privacy policies to other policy documents. Our results suggest that this requirements analysis technique, developed on a small set of policy documents in two domains, may generalize to other domains.
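Readability assessment of this kind is easy to prototype. The snippet below computes the Flesch Reading Ease score with a crude syllable counter; it is a generic stand-in, and the paper's actual readability metrics may differ.

    # Flesch Reading Ease with a rough vowel-group syllable heuristic.
    # This is a generic readability sketch, not the paper's pipeline.
    import re

    def count_syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835 - 1.015 * len(words) / sentences
                - 84.6 * syllables / len(words))

    print(flesch_reading_ease(
        "We may share your data with affiliates as permitted by law."))

Lower scores indicate harder text, which is one way to quantify the legalese burden the abstract describes.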
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems have happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.0432
0.04
0.04
0.04
0.04
0.02
0.008
0.000606
0
0
0
0
0
0
DHT-based Edge and Fog Computing Systems: Infrastructures and Applications To support emerging applications with latency requirements below what cloud data centers can offer, the edge and fog computing paradigms have arisen. In such systems, real-time data is processed closer to the edge of the network instead of in remote data centers. With the advances in edge and fog computing systems, novel and efficient solutions based on Distributed Hash Tables (DHTs) have emerged and play critical roles in system design. Several DHT-based solutions have been proposed to either augment the scalability and efficiency of edge and fog computing infrastructures or to enable application-specific functionalities such as task and resource management. This paper presents the first comprehensive study of state-of-the-art DHT-based architectures in edge and fog computing systems through the lenses of infrastructure and application. Moreover, the paper details the open problems and discusses future research guidelines for DHT-based edge and fog computing systems.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
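Chord's core operation is compactly expressible: hash nodes and keys onto one identifier circle and assign each key to its successor node. The sketch below shows only that mapping, with a deliberately small identifier space; finger tables, node joins, and stabilization are omitted.

    # Chord-style key-to-node mapping on an identifier circle.
    # The 16-bit id space is an illustrative assumption for readability.
    import hashlib, bisect

    M = 2 ** 16

    def chord_id(name):
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

    def successor(node_ids, key):
        """Return the first node clockwise from the key's identifier."""
        ids = sorted(node_ids)
        i = bisect.bisect_left(ids, chord_id(key))
        return ids[i % len(ids)]                 # wrap around the circle

    nodes = [chord_id(f"node-{k}") for k in range(8)]
    print(successor(nodes, "some-data-item"))

In the full protocol each node routes such lookups through O(log n) finger-table hops instead of sorting all identifiers centrally.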
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
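The composite metrics named in the abstract are simple products, as the hypothetical helper below shows; the example values are made up and carry no significance.

    # EDAP = energy * delay * area; EDA²P squares the area term, so it
    # penalizes die cost more heavily. Values below are placeholders.
    def edap(energy_j, delay_s, area_mm2):
        return energy_j * delay_s * area_mm2

    def eda2p(energy_j, delay_s, area_mm2):
        return energy_j * delay_s * area_mm2 ** 2

    print(edap(1.2, 0.05, 140.0), eda2p(1.2, 0.05, 140.0))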
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
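As a concrete instance, the textbook ADMM iteration for the lasso alternates a quadratic x-update, a soft-thresholding z-update, and a dual update. The sketch below follows that standard splitting; the penalty parameter and iteration count are arbitrary choices, and real solvers add stopping criteria.

    # Standard ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    import numpy as np

    def soft_threshold(v, k):
        return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
        n = A.shape[1]
        x = z = u = np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.inv(AtA + rho * np.eye(n))  # cached factor for the x-update
        for _ in range(iters):
            x = L @ (Atb + rho * (z - u))         # quadratic x-minimization
            z = soft_threshold(x + u, lam / rho)  # proximal step for the l1 term
            u = u + x - z                         # scaled dual update
        return z

    A = np.random.randn(50, 20)
    x_true = np.random.randn(20) * (np.random.rand(20) > 0.7)
    b = A @ x_true + 0.01 * np.random.randn(50)
    print(np.count_nonzero(admm_lasso(A, b)))     # recovered sparsity

In the distributed settings the review discusses, the x-update splits across machines holding data partitions while the z- and u-updates aggregate their results.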
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
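The construction can be pictured with membership vectors: each node draws a random bit string, and the level-i lists link nodes whose vectors share an i-bit prefix, with every list kept in key order. The sketch below builds these groupings only; the search and repair protocols are omitted.

    # Skip-graph level structure from random membership vectors.
    # The vector length (number of levels) is an illustrative choice.
    import random

    def build_levels(keys, max_level=3, seed=0):
        rng = random.Random(seed)
        mvec = {k: tuple(rng.randint(0, 1) for _ in range(max_level))
                for k in keys}
        levels = {}
        for i in range(max_level + 1):
            groups = {}
            for k in sorted(keys):               # each list stays key-ordered
                groups.setdefault(mvec[k][:i], []).append(k)
            levels[i] = list(groups.values())
        return levels

    for lvl, lists in build_levels([3, 9, 14, 21, 30, 42]).items():
        print(lvl, lists)

Because every node belongs to one list per level, losing a node removes it from a few short lists rather than disconnecting a tree, which is the source of the resilience the abstract highlights.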
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
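A back-of-the-envelope LOS link budget conveys the flavor of such models. The sketch below combines a generic Lambertian source gain with the standard OOK bit-error-rate formula; the transmit power, responsivity, and noise power are placeholder assumptions, not the market-weighted headlamp model of the paper.

    # Toy LOS VLC link: Lambertian channel gain plus OOK BER = Q(sqrt(SNR)).
    # All constants are illustrative placeholders.
    import math

    def lambertian_gain(distance_m, angle_rad, m=1.0):
        return (m + 1) / (2 * math.pi * distance_m ** 2) * math.cos(angle_rad) ** m

    def ook_ber(snr):
        return 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))

    pt, resp, noise = 1.0, 0.5, 1e-7       # W, A/W, A^2 (assumed values)
    pr = pt * lambertian_gain(20.0, 0.1)   # received optical power at 20 m
    print(ook_ber((resp * pr) ** 2 / noise))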
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows attenuating large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Distributed exploration of dynamic rings In the graph exploration problem, a team of mobile computational entities, called agents, arbitrarily positioned at some nodes of a graph, must cooperate so that each node is eventually visited by at least one agent. In the literature, the main focus has been on graphs that are static; that is, the topology is either invariant in time or subject to localized changes. The few studies on exploration of dynamic graphs have been almost all limited to the centralized case (i.e., assuming complete a priori knowledge of the changes and the times of their occurrence). We investigate the decentralized exploration of dynamic graphs (i.e., when the agents are unaware of the location and timing of the changes) focusing, in this paper, on dynamic systems whose underlying graph is a ring. We first consider the fully-synchronous systems traditionally assumed in the literature; i.e., all agents are active at each time step. We then introduce the notion of semi-synchronous systems, where only a subset of agents might be active at each time step (the choice of the subset is made by an adversary); this model is common in the context of mobile agents in continuous spaces but has never been studied before for agents moving in graphs. Our main focus is on the impact that the level of synchrony as well as other factors such as anonymity, knowledge of the size of the ring, and chirality (i.e., common orientation) have on the solvability of the problem, focusing on the minimum number of agents necessary. We draw an extensive map of feasibility, and of complexity in terms of minimum number of agent movements. All our sufficiency proofs are constructive, and almost all our solution protocols are asymptotically optimal.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
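The offloading decision can be caricatured as a utilization test: vertex frontiers dense enough to fill the crossbar go to the analog units, sparse ones to the digital cores. The rule and threshold below are invented for illustration and are not Hetraph's actual scheduler.

    # Hypothetical dense-vs-sparse offloading rule in the spirit of the
    # co-design; crossbar size and threshold are assumptions.
    def offload(frontier_size, crossbar_rows=128, threshold=0.25):
        utilization = frontier_size / crossbar_rows
        return "analog-crossbar" if utilization >= threshold else "digital-core"

    for f in (4, 40, 400):
        print(f, "->", offload(f))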
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An Energy Autonomous 400 MHz Active Wireless SAW Temperature Sensor Powered by Vibration Energy Harvesting An energy autonomous active wireless surface acoustic wave (SAW) temperature sensor system is presented in this paper. The proposed system adopts direct temperature to frequency conversion using a lithium niobate SAW resonator for both temperature sensing and the high-Q resonator core in a cross-coupled RF oscillator. This arrangement simplifies the temperature sensor readout circuit design and reduces the overall system power consumption. A power conditioning circuit based on a buck-boost converter is utilized to provide high efficiency power extraction from a piezoelectric energy harvester (PEH) and dynamic system power control. The SAW resonator is fabricated in-house using a two-step lithography procedure, while the RF oscillator as well as the PEH power conditioning circuit are implemented in standard 65-nm and 0.18-μm CMOS processes, respectively. The measured RF transmitter output power is -15 dBm with a phase noise of -99.4 dBc/Hz at 1 kHz offset, achieving a figure of merit (FOM) of -217.6 dB. The measured temperature sensing accuracy is ±0.6 °C in the -40 °C to 120 °C range. Fully powered by a vibration PEH, the proposed energy autonomous system has a self-startup voltage of 0.7 V and consumes an average power of 61.5 μW.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
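As a rough illustration of the structure just described, the sketch below (hypothetical helper names; plain sorted Python lists stand in for the doubly-linked lists) groups nodes into per-level lists by the prefixes of random membership vectors: level 0 holds every node in key order, and at level l all nodes sharing an l-bit prefix form their own list. A search starts in a node's highest-level list and descends, skip-list style.

```python
import random

def build_skip_graph(keys, max_level=3, seed=42):
    """Group nodes into skip-graph level lists by membership-vector prefix.

    Each node draws a random bit string (its membership vector); at level l,
    nodes sharing the same l-bit prefix belong to one sorted list. Level 0
    (the empty prefix) therefore contains all nodes.
    """
    rng = random.Random(seed)
    mv = {k: tuple(rng.randint(0, 1) for _ in range(max_level)) for k in keys}
    lists = {}
    for lvl in range(max_level + 1):
        for k in sorted(keys):
            lists.setdefault((lvl, mv[k][:lvl]), []).append(k)
    return mv, lists

mv, lists = build_skip_graph([3, 9, 14, 21, 30, 42, 57, 68])
for (lvl, prefix), members in sorted(lists.items()):
    print(f"level {lvl} prefix {prefix}: {members}")
```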
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
SIF-NPU: A 28nm 3.48 TOPS/W 0.25 TOPS/mm² CNN Accelerator with Spatially Independent Fusion for Real-Time UHD Super-Resolution This paper proposes a convolutional neural network (CNN)-based super-resolution accelerator for up-scaling to ultra-HD (UHD) resolution in real-time in edge devices. A novel error-compensated bit quantization is adopted to reduce bit depth in the SR task. Spatially independent layer fusion is exploited to satisfy high throughput requirements at UHD resolution by increasing parallelism. Burst operation with write mask in the dual-port SRAM increases the process element utilization by allowing concurrent multi-access without requiring additional memory. The accelerator is implemented in 28nm technology and achieves a FoM (TOPS/mm² × TOPS/W) of 0.87, at least 4.3 times higher than state-of-the-art CNN accelerators. The implemented accelerator supports up-scaling at up to 96 frames per second in UHD resolution.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
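A minimal sketch of the key-to-node mapping Chord provides is given below; it shows only the consistent-hashing successor rule on the identifier circle, not the finger-table routing that gives Chord its logarithmic lookup cost. The identifier-space size and helper names are illustrative.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # identifier space 2^m, kept small for illustration

def chord_id(name: str) -> int:
    """Hash a key or node name onto the identifier circle (Chord uses SHA-1)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(node_ids, ident):
    """First node whose ID is equal to or follows `ident` on the circle."""
    i = bisect_left(node_ids, ident)
    return node_ids[i % len(node_ids)]  # wrap around past the largest ID

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
k = chord_id("some-data-item")
print(f"key {k} -> stored on node {successor(nodes, k)}")
```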
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5-200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An all-digital clock generator using a fractionally injection-locked oscillator in 65nm CMOS Injection locking is an effective method to reduce the jitter of clock generators, especially for a ring oscillator-based PLL that has poor phase noise. While the use of injection locking reduces the output jitter, one disadvantage is that the output frequency can be changed only by integer multiples of the reference frequency, if it can be changed at all. In this work, an ADPLL-based clock generator is presented that employs a fractional-injection-locking method that exploits the multiphase output of a ring oscillator. The clock generator achieves an average of 4.23 ps rms jitter and a frequency resolution of 1MHz while using a reference clock of 32MHz.
A Spur-and-Phase-Noise-Filtering Technique for Inductor-Less Fractional-N Injection-Locked PLLs. A novel phase-noise-filtering technique based on phase-domain averaging is proposed to suppress the large injection spurs and poor high-frequency phase noise of inductor-less injection-locked phase-locked loops (IL-PLLs). Demonstrated using a 1.2-GHz fractional-N IL-PLL based on a capacitive-ring-coupled ring oscillator, wideband spur-and-phase-noise suppression of up to 20 dB is achieved allowing...
A 2.4GHz sub-harmonically injection-locked PLL with self-calibrated injection timing A low-phase-noise integer-N phase-locked loop (PLL) is attractive in many applications, such as clock generation and analog-to-digital conversion. The sub-harmonically injection-locked technique, sub-sampling technique, and the multiplying delay-locked loop (MDLL) can significantly improve the phase noise of an integer-N PLL. In the sub-harmonically injection-locked technique, to inject a low-frequency reference clock into a high-frequency voltage-controlled oscillator (VCO), the injection timing should be tightly controlled. If the injection timing varies due to process variation, it may cause a large reference spur or even cause the PLL to fail to lock. A sub-harmonically injection-locked PLL (SILPLL) adopts a sub-sampling phase-detector (PD) to automatically align the phase between the injection pulse and a VCO. However, a sub-sampling PD has a small capture range and a low bandwidth. The high-frequency non-linear effects of a sub-sampling PD may degrade the accuracy and limit the maximum speed of a VCO. In addition, a frequency-locked loop is needed for a sub-sampling PD. A delay line is manually adjusted to achieve the correct injection timing. However, the delay line is sensitive to process variations. Thus, the injection timing should be calibrated.
21.1 A 1.7GHz MDLL-based fractional-N frequency synthesizer with 1.4ps RMS integrated jitter and 3mW power using a 1b TDC The introduction of inductorless frequency synthesizers into standardized wireless systems still requires a high level of innovation in order to achieve the stringent requirements of low noise and low power consumption. Synthesizers based on the so-called multiplying delay-locked loop (MDLL) represent one of the most promising architectures in this direction [1-3]. An MDLL resembles a ring oscillator, in which the signal edge traveling along the delay line is periodically refreshed by a clean edge of the reference clock. In this manner, the phase noise of the ring oscillator is filtered up to half the reference frequency and the total output jitter is reduced significantly. Unfortunately, the concept of MDLL, and in general of injection locking (IL), is inherently limited to integer-N synthesis, which makes it unacceptable in practical RF systems. A first extension of injection locking to coarse fractional-N resolution has been shown in [4], in which however the fractional resolution is bounded to the inverse of the number of ring-oscillator delay stages. This paper introduces a fractional-N MDLL-based frequency synthesizer with a 1b time/digital converter (TDC), which is able to surpass the performance of existing inductorless fractional-N synthesizers. The prototype synthesizes frequencies between 1.6 and 1.9GHz with 190Hz resolution and achieves RMS integrated jitter of 1.4ps at 3mW power consumption, even in the worst case of near-integer channels.
A 1.1-GHz CMOS fractional-N frequency synthesizer with a 3-b third-order ΔΣ modulator A 1.1-GHz fractional-N frequency synthesizer is implemented in 0.5-μm CMOS employing a 3-b third-order ΔΣ modulator. The in-band phase noise of -92 dBc/Hz at 10-kHz offset with a spur of less than -95 dBc is measured at 900.03 MHz with a phase detector frequency of 7.994 MHz and a loop bandwidth of 40 kHz. Having less than 1-Hz frequency resolution and agile switching sp...
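The role the modulator plays can be illustrated with a much simpler first-order sketch: an accumulator whose carry-out toggles a dual-modulus divider between N and N+1, so the long-run average division ratio approaches N + frac. The paper's 3-b third-order ΔΣ modulator shapes the resulting quantization noise far better; the names and word lengths below are illustrative.

```python
def fractional_divider(n_int, frac, num_cycles, acc_bits=24):
    """First-order Delta-Sigma (accumulator) control of a dual-modulus divider.

    Each reference cycle the divider divides by n_int or n_int + 1; the
    accumulator carry-out selects the modulus so the average ratio
    approaches n_int + frac.
    """
    mod = 1 << acc_bits
    step = int(frac * mod)  # fractional word quantized to acc_bits
    acc = 0
    ratios = []
    for _ in range(num_cycles):
        acc += step
        carry = acc >= mod
        acc %= mod
        ratios.append(n_int + 1 if carry else n_int)
    return ratios

r = fractional_divider(n_int=137, frac=0.25, num_cycles=10000)
print(sum(r) / len(r))  # ~137.25 average division ratio
```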
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-μm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
A 5.4-Gbit/s Adaptive Continuous-Time Linear Equalizer Using Asynchronous Undersampling Histograms We demonstrate a new type of adaptive continuous-time linear equalizer (CTLE) based on asynchronous undersampling histograms. Our CTLE automatically selects the optimal equalizing filter coefficient among several predetermined values by searching for the coefficient that produces the largest peak value in histograms obtained with asynchronous undersampling. This scheme is simple and robust and does not require clock synchronization for its operation. A prototype chip realized in 0.13-μm CMOS technology successfully achieves equalization for 5.4-Gbit/s 2³¹−1 pseudorandom bit sequence data through 40-, 80-, and 120-cm PCB traces and 3-m DisplayPort cable. In addition, we present the results of statistical analysis with which we verify the reliability of our scheme for various sample sizes. The results of this analysis are confirmed with experimental data.
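A toy rendering of that selection logic is sketched below: each candidate coefficient is scored by the tallest bin of an amplitude histogram built from random-instant (asynchronous) undersamples, and the best-scoring coefficient is kept. The one-pole channel, the one-tap boost equalizer, and all parameter values are stand-ins for illustration, not the paper's circuit.

```python
import numpy as np

def equalize(x, c):
    """Toy high-frequency-boost equalizer: y[n] = x[n] + c*(x[n] - x[n-1])."""
    return x + c * np.diff(x, prepend=x[0])

def histogram_peak(y, n_samples=2000, bins=64, seed=1):
    """Peak bin count of an amplitude histogram from random-instant samples."""
    rng = np.random.default_rng(seed)
    samples = y[rng.integers(0, len(y), n_samples)]  # asynchronous sampling
    counts, _ = np.histogram(samples, bins=bins)
    return counts.max()

def select_coefficient(x, candidates):
    """Keep the candidate with the tallest histogram peak (most open eye)."""
    return max(candidates, key=lambda c: histogram_peak(equalize(x, c)))

# Toy link: random NRZ data through a one-pole low-pass "channel".
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 5000) * 2.0 - 1.0
x = np.repeat(bits, 8)            # 8 samples per bit
for i in range(1, len(x)):        # IIR low-pass: y[i] = 0.55*u[i] + 0.45*y[i-1]
    x[i] = 0.55 * x[i] + 0.45 * x[i - 1]
# The exact inverse of this channel is c = 0.45/0.55 ~ 0.82, so the
# search should land on the nearest candidate, 0.75.
print(select_coefficient(x, candidates=[0.0, 0.25, 0.5, 0.75, 1.0]))
```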
Enhanced phase noise modeling of fractional-N frequency synthesizers. Mathematical models for the behavior of fractional-N phase-locked-loop frequency synthesizers (Frac-N) are presented. The models are intended for calculating rms phase error and determining spurs in the output of Frac-N. The models describe noise contributions due to the charge pump (CP), the phase frequency detector (PFD), the loop filter, the voltage-controlled oscillator, and the delta-sigma modul...
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
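The classical single-sensor Lloyd-Max iteration that this work generalizes to the constrained multi-sensor setting alternates between midpoint decision boundaries and conditional-mean reproduction levels; a minimal sketch follows (the quantile initialization and level count are arbitrary choices).

```python
import numpy as np

def lloyd_max(samples, n_levels=4, n_iter=50):
    """Lloyd-Max scalar quantizer design: alternate between
    (1) midpoint partition boundaries between adjacent levels, and
    (2) conditional-mean (centroid) reproduction levels per cell."""
    levels = np.quantile(samples, np.linspace(0.1, 0.9, n_levels))  # init
    for _ in range(n_iter):
        bounds = (levels[:-1] + levels[1:]) / 2       # midpoints partition R
        idx = np.digitize(samples, bounds)
        levels = np.array([samples[idx == k].mean() if np.any(idx == k)
                           else levels[k] for k in range(n_levels)])
    return levels, bounds

rng = np.random.default_rng(0)
x = rng.standard_normal(100000)
levels, bounds = lloyd_max(x)
print(np.round(levels, 3))  # near-optimal 4-level quantizer for N(0, 1)
```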
Adaptive clustering for mobile wireless networks This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled, and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network
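One concrete clustering rule in this spirit is lowest-ID clustering. The centralized sketch below only illustrates how clusterheads and members can form; the paper's protocol is distributed and additionally handles node motion and code assignment.

```python
def lowest_id_clusters(adjacency):
    """Lowest-ID clustering sketch: processing node IDs in ascending order,
    an uncovered node becomes a clusterhead and claims its uncovered
    neighbors as members. Returns the heads and a node -> head map."""
    heads, member_of = [], {}
    for node in sorted(adjacency):
        if node not in member_of:
            heads.append(node)
            member_of[node] = node
            for nbr in adjacency[node]:
                member_of.setdefault(nbr, node)  # join first head heard
    return heads, member_of

# Tiny 6-node topology (undirected neighbor lists).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3, 5], 5: [4]}
print(lowest_id_clusters(adj))  # heads [0, 3, 5] and the membership map
```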
A dynamic analysis of the Dickson charge pump circuit Dynamics of the Dickson charge pump circuit are analyzed. The analytical results enable the estimation of the rise time of the output voltage and that of the power consumption during boosting. By using this analysis, the optimum number of stages to minimize the rise time has been estimated as 1.4 N_min, where N_min is the minimum value of the number of stages necessary for a given parame...
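Under the usual ideal assumptions (stray capacitance and load current neglected), each Dickson stage contributes at most Vdd - Vt of boost, which fixes the minimum stage count; the sketch below combines that textbook bound with the abstract's rise-time-optimal count of about 1.4 N_min. All names and numbers are illustrative.

```python
import math

def dickson_stages(v_dd, v_t, v_target):
    """Ideal Dickson pump stage counts.

    Per-stage boost is at most (v_dd - v_t) when stray capacitance and
    load current are neglected, so N_min = ceil((v_target - v_dd) /
    (v_dd - v_t)). The paper's dynamic analysis places the rise-time-
    optimal count near 1.4 * N_min.
    """
    n_min = math.ceil((v_target - v_dd) / (v_dd - v_t))
    return n_min, round(1.4 * n_min)

print(dickson_stages(v_dd=1.8, v_t=0.5, v_target=10.0))  # (7, 10)
```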
Extremal cover times for random walks on trees
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs, showing a superior figure of merit (Rds,ON × Qg) in comparison with silicon FETs [1], can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully take advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit propagation delay, tdelay, of up to several 10s of ns in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10MHz regime. Moreover, the switching slew rate (SR) of driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while slow SR leads to long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving, which incurs additional cost and complexity of the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high fsw operation in multiphase implementation are challenging.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.072447
0.04
0.033423
0.022208
0.011119
0.003333
0.000333
0.000088
0
0
0
0
0
0
Comparing the performance of distributed hash tables under churn A protocol for a distributed hash table (DHT) incurs communication costs to keep up with churn – changes in membership – in order to maintain its ability to route lookups efficiently. This paper formulates a unified framework for evaluating cost and performance. Communication costs are combined into a single cost measure (bytes), and performance benefits are reduced to a single latency measure. This approach correctly accounts for background maintenance traffic and timeouts during lookup due to stale routing data, and also correctly leaves open the possibility of different preferences in the tradeoff of lookup time versus communication cost. Using the unified framework, this paper analyzes the effects of DHT parameters on the performance of four protocols under churn.
Modeling Churn in P2P Networks The objective of this paper is to introduce a model to guide the analysis of the impact of churn in P2P networks. Using this model, a variety of node membership scenarios is created. These scenarios are used to capture and analyze the performance trends of Chord, a distributed hash table (DHT) based resource lookup protocol for Peer-to-peer overlay networks. The performance study focuses both on the performance of routing and content retrieval. This study also identifies the limitations of various churn-alleviating mechanisms, frequently proposed in the literature. The study highlights the importance of the content nature and access pattern on the performance of P2P, DHT-based overlay networks. The results show that the type of content being accessed and the way the content is accessed has a significant impact on the performance of P2P networks.
Designing Less-Structured P2P Systems for the Expected High Churn We address the problem of highly transient populations in unstructured and loosely structured peer-to-peer (P2P) systems. We propose a number of illustrative query-related strategies and organizational protocols that, by taking into consideration the expected session times of peers (their lifespans), yield systems with performance characteristics more resilient to the natural instability of their environments. We first demonstrate the benefits of lifespan-based organizational protocols in terms of end-application performance and in the context of dynamic and heterogeneous Internet environments. We do this using a number of currently adopted and proposed query-related strategies, including methods for query distribution, caching, and replication. We then show, through trace-driven simulation and wide-area experimentation, the performance advantages of lifespan-based, query-related strategies when layered over currently employed and lifespan-based organizational protocols. While merely illustrative, the evaluated strategies and protocols clearly demonstrate the advantages of considering peers' session time in designing widely-deployed P2P systems.
Decentralized approach to resource availability prediction using group availability in a P2P desktop grid In a desktop grid model, the job (computational task) is submitted for execution in the resource only when the resource is idle. There is no guarantee that the job which has started to execute in a resource will complete its execution without any disruption from user activity (such as a keyboard stroke or mouse move) if the desktop machines are used for other purposes. This problem becomes more challenging in a Peer-to-Peer (P2P) model for a desktop grid where there is no central server that decides to allocate a job to a particular resource. This paper describes a P2P desktop grid framework that utilizes resource availability prediction, using group availability data. We improve the functionality of the system by submitting the jobs on machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.
Frt-Skip Graph: A Skip Graph-Style Structured Overlay Based On Flexible Routing Tables Structured overlays enable a number of nodes to construct a logical network autonomously and search for each other. Skip Graph, one of the structured overlays, constructs an overlay network based on the Skip List structure and supports range queries for keys. Skip Graph manages routing tables based on random digits; therefore, a deviation in these digits disturbs effective utilization of the routing table entries and increases path length beyond the ideal value. We therefore propose FRT-Skip Graph, a novel structured overlay that solves the issues of Skip Graph and provides desirable features not in Skip Graph. FRT-Skip Graph is designed based on Flexible Routing Tables and supports range queries similarly to Skip Graph. Furthermore, it provides features derived from FRT, namely, dynamic routing table size and high extensibility.
An adaptive stabilization framework for distributed hash tables Distributed Hash Tables (DHT) algorithms obtain good lookup performance bounds by using deterministic rules to organize peer nodes into an overlay network. To preserve the invariants of the overlay network, DHTs use stabilization procedures that reorganize the topology graph when participating nodes join or fail. Most DHTs use periodic stabilization, in which peers perform stabilization at fixed intervals of time, disregarding the rate of change in overlay topology; this may lead to poor performance and large stabilization-induced communication overhead. We propose a novel adaptive stabilization framework that takes into consideration the continuous evolution in network conditions. Each peer collects statistical data about the network and dynamically adjusts its stabilization rate based on the analysis of the data. The objective of our scheme is to maintain nominal network performance and to minimize the communication overhead of stabilization.
Authenticated and persistent skip graph: a data structure for cloud based data-centric applications Cloud computing has evolved as a popular computing environment. In data centric applications hosted on the cloud, data is accessed and updated in a purely distributed manner. The distributed data structures used for dynamic storage of the data for such applications require two fundamental qualities, authentication and persistence, which are not completely met by existing distributed data structures. Authentication is a crucial requirement for data structures used on the cloud, as users need to be convinced about the validity of the data they receive. Moreover the data structure has to be persistent, so that changes can be made to the data structure without losing old data, or old versions of the data structure, which may be required by different users in the distributed environment. In this paper we present authenticated and persistent skip graph, an enhanced variant of the existing skip graph. It is based on an event based model, to correspond to the distributed data centric application requirements. Skip graphs are randomized data structures based on skip lists, proposed for use in distributed applications. But skip graphs are neither persistent nor authenticated. The fat node technique is adapted here for providing persistence, and a role based access control, with a cryptographic scheme is used to ensure valid data update and verify the validity of the retrieved data. Our data structure is proved to be efficient in terms of time and space complexities.
Towards a Common API for Structured Peer-to-Peer Overlays In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them. In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties. We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community.
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Reliable Multimedia Transmission Over Cognitive Radio Networks Using Fountain Codes With the explosive growth of wireless multimedia applications over the wireless Internet in recent years, the demand for radio spectral resources has increased significantly. In order to meet the quality of service, delay, and large bandwidth requirements, various techniques such as source and channel coding, distributed streaming, multicast etc. have been considered. In this paper, we propose a technique for distributed multimedia transmission over the secondary user network, which makes use of opportunistic spectrum access with the help of cognitive radios. We use digital fountain codes to distribute the multimedia content over unused spectrum and also to compensate for the loss incurred due to primary user interference. Primary user traffic is modelled as a Poisson process. We develop the techniques to select appropriate channels and study the trade-offs between link reliability, spectral efficiency and coding overhead. Simulation results are presented for the secondary spectrum access model.
An artificial neural network (p,d,q) model for timeseries forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed.
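A compact sketch of the hybrid idea follows: fit a linear autoregression first, then train a small neural network on its residuals, and forecast as the sum of the two. Here a least-squares AR(p) fit stands in for a full ARIMA(p,d,q) model and scikit-learn's MLPRegressor for the ANN; both are substitutions for illustration, not the paper's exact models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_ar(y, p):
    """Least-squares AR(p) fit: the 'linear part' standing in for ARIMA."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def hybrid_forecast(y, p=3):
    """Hybrid scheme: one-step forecast = linear forecast + predicted residual."""
    coef = fit_ar(y, p)
    lin_pred = np.array([y[t - p:t][::-1] @ coef for t in range(p, len(y))])
    resid = y[p:] - lin_pred
    # The ANN learns whatever nonlinear structure is left in the residuals.
    Xr = np.column_stack([resid[p - k - 1:len(resid) - k - 1] for k in range(p)])
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(Xr, resid[p:])
    next_lin = y[-p:][::-1] @ coef
    next_res = ann.predict(resid[-p:][::-1].reshape(1, -1))[0]
    return next_lin + next_res

# Usage on a noisy sinusoid.
y = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.default_rng(0).standard_normal(300)
print(hybrid_forecast(y))
```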
A 2.87±0.19dB NF 3.1∼10.6GHz ultra-wideband low-noise amplifier using 0.18µm CMOS technology.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.045896
0.044889
0.033778
0.033333
0.033333
0.026111
0.015128
0.00087
0
0
0
0
0
0
SecDir: a secure directory to defeat directory side-channel attacks Directories for cache coherence have been recently shown to be vulnerable to conflict-based side-channel attacks. By forcing directory conflicts, an attacker can evict victim directory entries, which in turn trigger the eviction of victim cache lines from private caches. This evidence strongly suggests that directories need to be redesigned for security. The key to a secure directory is to block interference between processes. Sadly, in an environment with many cores, this is hard or expensive to do. This paper presents the first design of a scalable secure directory. We call it SecDir. SecDir takes part of the storage used by a conventional directory and re-assigns it to per-core private directory areas used in a victim-cache manner called Victim Directories (VDs). The partitioned nature of VDs prevents directory interference across cores, defeating directory side-channel attacks. The VD of a core is distributed, and holds as many entries as lines in the private L2 cache of the core. To minimize victim self-conflicts in a VD during an attack, a VD is organized as a cuckoo directory. Such a design also obscures the victim's conflict patterns from the attacker. For our evaluation, we model with simulations the directory of an Intel Skylake-X server with and without SecDir. Our results show that SecDir has a negligible performance overhead. Furthermore, SecDir is area-efficient.
TILE64 - Processor: A 64-Core SoC with Mesh Interconnect The TILE64™ processor is a multicore SoC targeting the high-performance demands of a wide range of embedded applications across networking and digital multimedia applications. A figure shows a block diagram with 64 tile processors arranged in an 8×8 array. These tiles connect through a scalable 2D mesh network with high-speed I/Os on the periphery. Each general-purpose processor is identical and capable of running SMP Linux.
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Whispers in the Hyper-Space: High-Bandwidth and Reliable Covert Channel Attacks Inside the Cloud Privacy and information security in general are major concerns that impede enterprise adaptation of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a “potential risk.” In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment and uncover their major insufficiency and difficulties. We then overcome these obstacles by: 1) redesigning a pure timing-based data transmission scheme, and 2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experimental results show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
Sliding Right Into Disaster: Left-To-Right Sliding Windows Leak It is well known that constant-time implementations of modular exponentiation cannot use sliding windows. However, software libraries such as Libgcrypt, used by GnuPG, continue to use sliding windows. It is widely believed that, even if the complete pattern of squarings and multiplications is observed through a side-channel attack, the number of exponent bits leaked is not sufficient to carry out a full key-recovery attack against RSA. Specifically, 4-bit sliding windows leak only 40% of the bits, and 5-bit sliding windows leak only 33% of the bits.In this paper we demonstrate a complete break of RSA-1024 as implemented in Libgcrypt. Our attack makes essential use of the fact that Libgcrypt uses the left-to-right method for computing the sliding-window expansion. We show for the first time that the direction of the encoding matters: the pattern of squarings and multiplications in left-to-right sliding windows leaks significantly more information about the exponent than right-to-left. We show how to extend the Heninger-Shacham algorithm for partial key reconstruction to make use of this information and obtain a very efficient full key recovery for RSA-1024. For RSA-2048 our attack is efficient for 13% of keys.
NetCAT: Practical Cache Attacks from the Network Increased peripheral performance is causing strain on the memory subsystem of modern processors. For example, available DRAM throughput can no longer sustain the traffic of a modern network card. Scrambling to deliver the promised performance, instead of transferring peripheral data to and from DRAM, modern Intel processors perform I/O operations directly on the Last Level Cache (LLC). While Direct Cache Access (DCA) instead of Direct Memory Access (DMA) is a sensible performance optimization, it is unfortunately implemented without care for security, as the LLC is now shared between the CPU and all the attached devices, including the network card.In this paper, we reverse engineer the behavior of DCA, widely referred to as Data-Direct I/O (DDIO), on recent Intel processors and present its first security analysis. Based on our analysis, we present NetCAT, the first Network-based PRIME+PROBE Cache Attack on the processor's LLC of a remote machine. We show that NetCAT not only enables attacks in cooperative settings where an attacker can build a covert channel between a network client and a sandboxed server process (without network), but more worryingly, in general adversarial settings. In such settings, NetCAT can enable disclosure of network timing-based sensitive information. As an example, we show a keystroke timing attack on a victim SSH connection belonging to another client on the target server. Our results should caution processor vendors against unsupervised sharing of (additional) microarchitectural components with peripherals exposed to malicious input.
ABSynthe: Automatic Blackbox Side-channel Synthesis on Commodity Microarchitectures
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Multi-Strategy Coevolving Aging Particle Optimization We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also apply MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
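The first-stage update is simple enough to sketch: every coordinate gets an independent perturbation whose radius shrinks over time, plus a pull towards the incumbent best. The decay and attraction schedules below are hypothetical placeholders, not the published MS-CAP settings, and the sphere objective is a stand-in for the benchmark functions.

```python
import random

def cap_step(particle, best, radius, pull):
    """Aging-particle move: per-dimension perturbation with a shrinking
    radius plus attraction towards the current best solution."""
    return [x + random.uniform(-radius, radius) + pull * (b - x)
            for x, b in zip(particle, best)]

def sphere(x):                      # toy objective
    return sum(v * v for v in x)

best = [random.uniform(-5, 5) for _ in range(10)]
particle, radius, pull = list(best), 1.0, 0.1
for _ in range(500):
    cand = cap_step(particle, best, radius, pull)
    if sphere(cand) < sphere(particle):
        particle = cand
    if sphere(particle) < sphere(best):
        best = list(particle)
    radius *= 0.99                  # decaying exploration radius
    pull = min(0.9, pull * 1.01)    # increasing attraction force

print(sphere(best))
```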
Analog-to-digital converter survey and analysis Analog-to-digital converters (ADCs) are ubiquitous, critical components of software radio and other signal processing systems. This paper surveys the state-of-the-art of ADCs, including experimental converters and commercially available parts. The distribution of resolution versus sampling rate provides insight into ADC performance limitations. At sampling rates below 2 million samples per second (Ms/s), resolution appears to be limited by thermal noise. At sampling rates ranging from ~2 Ms/s to ~4 giga samples per second (Gs/s), resolution falls off by ~1 bit for every doubling of the sampling rate. This behavior may be attributed to uncertainty in the sampling instant due to aperture jitter. For ADCs operating at multi-Gs/s rates, the speed of the device technology is also a limiting factor due to comparator ambiguity. Many ADC architectures and integrated circuit technologies have been proposed and implemented to push back these limits. The trend toward single-chip ADCs brings lower power dissipation. However, technological progress as measured by the product of the ADC resolution (bits) times the sampling rate is slow. Average improvement is only ~1.5 bits for any given sampling frequency over the last six to eight years.
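The ~1 bit per octave falloff attributed to aperture jitter follows directly from the jitter-limited SNR of a full-scale sine input, SNR = -20 log10(2π f t_j), with ENOB = (SNR - 1.76)/6.02. A quick check (the 1 ps jitter value is illustrative):

```python
import math

def jitter_limited_enob(f_in_hz, t_jitter_s):
    """Effective number of bits when aperture jitter dominates."""
    snr_db = -20 * math.log10(2 * math.pi * f_in_hz * t_jitter_s)
    return (snr_db - 1.76) / 6.02

# doubling the input frequency costs 6.02 dB of SNR, i.e. exactly one bit
for f in (1e8, 2e8, 4e8):
    print(f"{f:.0e} Hz -> {jitter_limited_enob(f, 1e-12):.2f} bits")
```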
A process calculus for Mobile Ad Hoc Networks We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation equivalence (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation equivalence, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. We additionally define a symbolic semantics for the ω-calculus extended with the mismatch operator, along with a corresponding notion of symbolic bisimulation equivalence, and establish congruence results for this extension as well. Finally, we illustrate the practical utility of the calculus by developing and analyzing formal models of a leader election protocol for MANETs and the AODV routing protocol.
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-μm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
Integration operators for generating RDF/OWL-based user defined mediator views in a grid environment Research and development activities relating to the grid have generally focused on applications where data is stored in files. However, many scientific and commercial applications are highly dependent on Information Servers (ISs) for storage and organization of their data. A data-information system that supports operations on multiple information servers in a grid environment is referred to as an interoperable grid system. Different perceptions by end-users of interoperable systems in a grid environment may lead to different reasons for integrating data. Even the same user might want to integrate the same distributed data in various ways to suit different needs, roles or tasks. Therefore multiple mediator views are needed to support this diversity. This paper describes our approach to supporting semantic interoperability in a heterogeneous multi-information server grid environment. It is based on using Integration Operators for generating multiple semantically rich RDF/OWL-based user defined mediator views above the grid participating ISs. These views support different perceptions of the distributed and heterogeneous data available. A set of grid services are developed for the implementation of the mediator views.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.1
0.1
0.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
0
Artificial Intelligence Service by Satellite Networks based on Ensemble Learning with Cloud-Edge-End Integration Satellite networks are a necessary supplement to ground networks for wireless coverage in remote areas. At the same time, Deep Neural Network (DNN) based artificial intelligence (AI) applications have been widely applied in many aspects of people's lives, but most DNN models are deployed in the cloud due to their high complexity. Since the cloud is far from end users in remote areas, cloud-based AI service cannot meet the latency requirements of those users. Satellite-based edge computing is a promising solution to this problem, but the limited resources on a satellite platform cannot support complex models. In this paper, we propose to integrate the cloud, satellite edges and the end users to form an AI service framework, and apply ensemble learning to provide efficient and reliable AI service. Two case studies on an image classification task are performed to show the effectiveness of the proposed scheme.
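A minimal reading of the cloud-edge-end framework is that several lightweight models on satellite edges vote, with hard cases (hypothetically) escalated to the large cloud model. The sketch below shows only the majority-vote step; the toy "models" and the escalation policy are assumptions for illustration.

```python
from collections import Counter

def ensemble_predict(edge_models, x):
    """Majority vote over lightweight edge models; a tie or low-margin vote
    could be forwarded to the cloud model (policy not shown)."""
    votes = Counter(m(x) for m in edge_models)
    label, _ = votes.most_common(1)[0]
    return label

# toy "models": each maps a feature vector to a class id
models = [lambda x: int(sum(x) > 0),
          lambda x: int(x[0] > 0),
          lambda x: int(x[-1] > 0)]
print(ensemble_predict(models, [0.3, -0.1, 0.5]))  # -> 1
```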
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
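Since the dominance frontier is this abstract's key device, a compact reconstruction of the bottom-up computation is worth having. The sketch assumes the immediate-dominator map is already computed (e.g., by the Lengauer-Tarjan algorithm); the diamond CFG at the end is illustrative.

```python
def dominance_frontiers(succ, idom):
    """Bottom-up dominance-frontier computation in the style of Cytron et al.
    succ: node -> CFG successor list; idom: node -> immediate dominator
    (None for the entry node)."""
    children = {n: [] for n in succ}
    for n, d in idom.items():
        if d is not None:
            children[d].append(n)
    df = {n: set() for n in succ}

    def walk(x):                        # post-order over the dominator tree
        for z in children[x]:
            walk(z)
        for y in succ[x]:               # DF_local: successors not idom'd by x
            if idom[y] != x:
                df[x].add(y)
        for z in children[x]:           # DF_up: inherited from tree children
            for y in df[z]:
                if idom[y] != x:
                    df[x].add(y)

    entry = next(n for n, d in idom.items() if d is None)
    walk(entry)
    return df

succ = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}   # diamond CFG
idom = {'A': None, 'B': 'A', 'C': 'A', 'D': 'A'}
print(dominance_frontiers(succ, idom))  # B and C have frontier {D}
```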
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
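Chord's one operation, mapping a key to the node responsible for it, can be mimicked in miniature. The linear scan below is only a functional stand-in for the real finger-table routing (which reaches the successor in O(log n) hops); the finger_table helper shows how the fingers would be populated. Ring size and node IDs are illustrative.

```python
M = 8  # identifier bits; positions live on a ring of size 2**M

def find_successor(nodes, key):
    """First node clockwise from the key on the identifier ring."""
    k = key % (2 ** M)
    ring = sorted(nodes)
    return next((n for n in ring if n >= k), ring[0])  # wrap around

def finger_table(node, nodes):
    """finger[i] = successor((node + 2**i) mod 2**M) for i = 0..M-1."""
    return [find_successor(nodes, node + 2 ** i) for i in range(M)]

nodes = [10, 70, 140, 200]
print(find_successor(nodes, 105))  # -> 140
print(finger_table(70, nodes))     # exponentially spaced routing entries
```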
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
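For reference, the scaled-form iterations of the algorithm under review, for minimizing f(x) + g(z) subject to Ax + Bz = c with penalty parameter ρ, are:

```latex
\begin{aligned}
x^{k+1} &:= \operatorname*{argmin}_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^k - c + u^k \rVert_2^2 \\
z^{k+1} &:= \operatorname*{argmin}_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^k \rVert_2^2 \\
u^{k+1} &:= u^k + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}
```

Dual decomposition, the method of multipliers, and the splitting schemes named in the abstract all arise as special cases or close relatives of this three-step loop.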
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
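The LOS component of such a link is commonly modeled with a generalized Lambertian pattern. The paper uses an empirical market-weighted headlamp model instead, so the function below is only the generic stand-in, H = (m+1)·A·cos^m(φ)·cos(ψ)/(2πd²), with illustrative parameter values.

```python
import math

def los_channel_gain(d_m, phi_rad, psi_rad, m=1, area_m2=1e-4):
    """Generalized Lambertian LOS gain: d is the Tx-Rx distance, phi the
    irradiance angle, psi the incidence angle, m the Lambertian order,
    area the photodetector's active area."""
    return ((m + 1) * area_m2 * math.cos(phi_rad) ** m * math.cos(psi_rad)
            / (2 * math.pi * d_m ** 2))

# received power falls with d**2; at 20 m the gain is tiny, hence the
# importance of detector placement noted in the abstract
print(los_channel_gain(20.0, 0.1, 0.2))
```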
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 56-Gb/s 8-mW PAM4 CDR/DMUX With High Jitter Tolerance The demand for low-power wireline circuits has motivated extensive work on novel circuit solutions. This article describes a one-eighth-rate clock and data recovery (CDR) circuit and a demultiplexer (DMUX) for processing four-level pulse-amplitude modulation (PAM4) signals in receivers (RXs). Detecting both major and minor data transitions, the proposed architecture can achieve a wider loop bandwidth (BW), suppressing oscillator phase noise and improving the jitter tolerance. Fabricated in 28-nm CMOS technology, the prototype provides a jitter transfer BW of 160 MHz with a tolerance of 1 UI at 10 MHz.
A 3x9 Gb/s Shared, All-Digital CDR for High-Speed, High-Density I/O. This paper presents a novel all-digital CDR scheme in 90 nm CMOS. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data (“data clock”) and the other is swept across the delay line (“search clock”). As the search clock is swept, its samples are compared against the data samples to generate...
0.6–2.7-Gb/s Referenceless Parallel CDR With a Stochastic Dispersion-Tolerant Frequency Acquisition Technique A 0.6-2.7-Gb/s phase-rotator-based four-channel digital clock and data recovery (CDR) IC featuring a low-power dispersion-tolerant referenceless frequency acquisition technique is presented. A quasi-periodic reference clock signal extracted directly from a dispersed input signal is distributed to digitally controlled phase rotators in the CDR ICs for phase acquisition. A multiphase frequency acquisition scheme is employed for the reduction of the clock jitter. The measurement results show that the proposed design offers a lower frequency offset and clock noise floor under channel dispersion, as compared with conventional designs. The proposed four-channel digital CDR IC is fabricated in a 90-nm CMOS process. The figure of merit for a single channel, including a feedforward equalizer, a decision-feedback equalizer, and a referenceless CDR, is 8 mW/Gb/s.
A 6-Gb/s adaptive-loop-bandwidth clock and data recovery (CDR) circuit An adaptive circuit is proposed to adjust the CDR loop bandwidth based on the jitter spectral profile for better jitter performance. The proposed lock detector (PLD) is employed to achieve better jitter suppression without degrading jitter tolerance (JTOL). The proposed circuit enhances the jitter suppression by 14.14 dB at an 8-MHz sinusoidal jitter source. This adaptive block is synthesized fully digitally, and the whole circuit consumes 86.4 mW for 6-Gb/s input data.
High-Performance Harmonic-Rich Single-Core VCO With Multi-LC Tank: A Tutorial This brief overviews the development of high-performance single-core voltage-controlled oscillators (VCOs) employing a multi-LC tank over the past two decades. From the circuit-design perspective, we study the key breakthroughs for constantly heightening the figure-of-merit and lowering the 1/f³ phase-noise corner, according to the noise source and transfer, narrow and wideband performance optimization, as well as one- and two-dimensional (1D/2D) tuning. We also aim to present the prospects of single-core VCOs.
A 140-Mb/s to 1.82-Gb/s continuous-rate embedded clock receiver for flat-panel displays A wide-range fast-locking embedded clock receiver, which can provide a continuous data rate of 140 Mb/s to 1.82 Gb/s in a 0.25-µm CMOS process, is presented. A fast lock time of 7.5 µs and a small root-mean-square jitter of 15 ps are achieved by using the proposed frequency-band selection and frequency acquisition schemes, as well as a simple linear-phase detector. The implemented embedded clock receiver occupies 2.00 mm2 and consumes currents of 44 and 137 mA at 140 Mb/s and 1.82 Gb/s, respectively, including input/output currents.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Termination detection for diffusing computations
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
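If-conversion replaces a key-dependent branch by computing both arms and selecting the result with a mask, so the executed instruction sequence no longer depends on the secret. The sketch shows only the shape of the transformation: Python gives no constant-time guarantees, and the paper's pass operates in a compiler backend on x86 code, not on source.

```python
def select_branchy(secret_bit, a, b):
    # key-dependent control flow: the two paths may differ in timing
    return a if secret_bit else b

def select_if_converted(secret_bit, a, b):
    # both arms computed, result chosen by masking (32-bit operands assumed)
    mask = -secret_bit & 0xFFFFFFFF          # all-ones if bit == 1, else zero
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)

assert select_if_converted(1, 7, 9) == 7
assert select_if_converted(0, 7, 9) == 9
```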
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i. e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment which results in resource allocation problems, or, are job-based where for each invocation, the job along with the data has to be transferred for remote execution resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a Bit-Torrent like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
Impact of Nonlinear Devices in Software Radio Signals Software radio signals with several channels have high dynamic range, making them very prone to nonlinear distortion effects, namely those inherent to efficient power amplification. In this paper we present an analytical approach for evaluating the impact of nonlinear devices on software radio signals. As an application, we consider the nonlinear amplification of software radio signals where we have a high number of channels with substantially different powers.
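A quick numerical stand-in for the effect under analysis: sum a few carriers with substantially different powers, pass the composite through a memoryless saturation (a crude model of an efficient power amplifier), and measure the residual distortion power. Carrier spacing, amplitudes, and the saturation level are arbitrary choices, not values from the paper.

```python
import math

def soft_clip(x, sat):
    """Memoryless saturation standing in for the nonlinear amplifier."""
    return max(-sat, min(sat, x))

N = 2048
amps = [1.0, 0.5, 0.1]                      # channels with unequal powers
sig = [sum(a * math.cos(2 * math.pi * (k + 1) * 50 * n / N)
           for k, a in enumerate(amps)) for n in range(N)]
out = [soft_clip(s, sat=1.2) for s in sig]  # peaks reach 1.6, so clipping occurs

# distortion power relative to signal power: a crude intermodulation measure
err = sum((o - s) ** 2 for o, s in zip(out, sig)) / sum(s * s for s in sig)
print(f"relative distortion power: {err:.3e}")
```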
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
GraphP: Reducing Communication for PIM-Based Graph Processing with Efficient Data Partition Processing-In-Memory (PIM) is an effective technique that reduces data movements by integrating processing units within memory. The recent advance of “big data” and 3D stacking technology make PIM a practical and viable solution for modern data processing workloads. It is exemplified by the recent research interest in PIM-based acceleration. Among them, TESSERACT is a PIM-enabled parallel graph processing architecture based on Micron's Hybrid Memory Cube (HMC), one of the most prominent 3D-stacked memory technologies. It implements a Pregel-like vertex-centric programming model, so that users could develop programs in the familiar interface while taking advantage of PIM. Despite the orders of magnitude speedup compared to DRAM-based systems, TESSERACT generates excessive cross-cube communications through SerDes links, whose bandwidth is much less than the aggregated local bandwidth of HMCs. Our investigation indicates that this is because of the restricted data organization required by the vertex programming model. In this paper, we argue that a PIM-based graph processing system should take data organization as a first-order design consideration. Following this principle, we propose GraphP, a novel HMC-based software/hardware co-designed graph processing system that drastically reduces communication and energy consumption compared to TESSERACT. GraphP features three key techniques. 1) “Source-cut” partitioning, which fundamentally changes the cross-cube communication from one remote put per cross-cube edge to one update per replica. 2) “Two-phase Vertex Program”, a programming model designed for the “source-cut” partitioning with two operations: GenUpdate and ApplyUpdate. 3) Hierarchical communication and overlapping, which further improves performance with unique opportunities offered by the proposed partitioning and programming model. We evaluate GraphP using a cycle-accurate simulator with 5 real-world graphs and 4 algorithms. The results show that it provides an average 1.7× speedup and 89% energy saving compared to TESSERACT.
Toward standardized near-data processing with unrestricted data placement for GPUs 3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.
FAFNIR: Accelerating Sparse Gathering by Using Efficient Near-Memory Intelligent Reduction Memory-bound sparse gathering, caused by irregular random memory accesses, has become an obstacle in several on-demand applications such as embedding lookup in recommendation systems. To reduce the amount of data movement, and thereby better utilize memory bandwidth, previous studies have proposed near-data processing (NDP) solutions. The issue of prior work, however, is that they either minimize ...
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Near memory key/value lookup acceleration. In the "Big Data" era, fast lookup of keys in a key/value store is a ubiquitous operation. We have designed a near-memory accelerator combining simple hardware building blocks to accelerate lookup in a hash-table-based key/value store. We report on the co-design of hardware and software to accomplish fast lookup using open addressing. The accelerator implements a batch get command to look up a set of keys in a single request. Using an FPGA emulator, we evaluate the performance of a query workload under a comprehensive range of conditions such as hash table load factor (fill) and query key repeat distribution (likelihood of a key to reappear in a query workload). We emulate two memory configurations: Hybrid Memory Cube (or High Bandwidth Memory), and Storage Class Memory. Our design shows 12.8×–2.9× speedup compared to conventional CPU lookup, depending on workload characteristics.
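The lookup being offloaded is easy to model in software. The sketch below is an illustration of open addressing with linear probing plus a batch get, not the accelerator's actual design; names and sizes are arbitrary.

```python
class OpenAddressTable:
    """Software model of open-addressing lookup with linear probing."""

    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # each slot: (key, value) or None

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        for _ in range(len(self.slots)):
            yield i
            i = (i + 1) % len(self.slots)

    def put(self, key, value):
        for i in self._probe(key):
            if self.slots[i] is None or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return
        raise RuntimeError("table full")

    def get(self, key):
        for i in self._probe(key):
            if self.slots[i] is None:
                return None          # empty slot ends the probe chain
            if self.slots[i][0] == key:
                return self.slots[i][1]
        return None

    def batch_get(self, keys):
        """Model of the accelerator's batch get: one request, many keys."""
        return {k: self.get(k) for k in keys}

t = OpenAddressTable()
t.put("a", 1); t.put("b", 2)
print(t.batch_get(["a", "b", "missing"]))  # {'a': 1, 'b': 2, 'missing': None}
```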
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning The sigmoid function is the most commonly used activation function in feed-forward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, symmetry and slope of the function, respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configurations in different layers. By regulating and modifying the sigmoid function parameter configuration in different layers, the error-signal problem, the oscillation problem and the asymmetrical-input problem can be reduced. To compare the learning capabilities and learning rates of the hybrid sigmoidal networks with those of conventional networks, we tested them on the two-spirals benchmark, which is known to be a very difficult task for backpropagation and its relatives.
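The abstract does not spell out the parameterization, so the sketch below uses one plausible reading, with a as the dynamic range, b as a symmetry offset, and c as the slope; the paper's exact form may differ.

```python
import numpy as np

def sigmoid3(x, a=1.0, b=0.0, c=1.0):
    """Three-parameter sigmoid: a sets the dynamic range (output spans
    (b, a + b)), b shifts the symmetry point, c scales the slope.
    Illustrative parameterization, not necessarily the paper's."""
    return a / (1.0 + np.exp(-c * x)) + b

def sigmoid3_grad(x, a=1.0, b=0.0, c=1.0):
    """Derivative w.r.t. x, as used by backpropagation; a larger c
    steepens the curve and enlarges the gradient near the midpoint."""
    s = 1.0 / (1.0 + np.exp(-c * x))
    return a * c * s * (1.0 - s)

x = np.linspace(-4, 4, 5)
print(sigmoid3(x, a=2.0, b=-1.0, c=1.5))      # symmetric output in (-1, 1)
print(sigmoid3_grad(x, a=2.0, b=-1.0, c=1.5))
```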
Data access optimization in a processing-in-memory system The Active Memory Cube (AMC) system is a novel heterogeneous computing system concept designed to provide high performance and power-efficiency across a range of applications. The AMC architecture includes general-purpose host processors and specially designed in-memory processors (processing lanes) that would be integrated in a logic layer within 3D DRAM memory. The processing lanes have large vector register files but no power-hungry caches or local memory buffers. Performance depends on how well the resulting higher effective memory latency within the AMC can be managed. In this paper, we describe a combination of programming language features, compiler techniques, operating system interfaces, and hardware design that can effectively hide memory latency for the processing lanes in an AMC system. We present experimental data to show how this approach improves the performance of a set of representative benchmarks important in high performance computing applications. As a result, we are able to achieve high performance together with power efficiency using the AMC architecture.
Scheduling Techniques for GPU Architectures with Processing-In-Memory Capabilities. Processing data in or near memory (PIM), as opposed to in conventional computational units in a processor, can greatly alleviate the performance and energy penalties of data transfers from/to main memory. Graphics Processing Unit (GPU) architectures and applications, where main memory bandwidth is a critical bottleneck, can benefit from the use of PIM. To this end, an application should be properly partitioned and scheduled to execute on either the main, powerful GPU cores that are far away from memory or the auxiliary, simple GPU cores that are close to memory (e.g., in the logic layer of 3D-stacked DRAM). This paper investigates two key code scheduling issues in such a GPU architecture that has PIM capabilities, to maximize performance and energy-efficiency: (1) how to automatically identify the code segments, or kernels, to be offloaded to the cores in memory, and (2) how to concurrently schedule multiple kernels on the main GPU cores and the auxiliary GPU cores in memory. We develop two new runtime techniques: (1) a regression-based affinity prediction model and mechanism that accurately identifies which kernels would benefit from PIM and offloads them to GPU cores in memory, and (2) a concurrent kernel management mechanism that uses the affinity prediction model, a new kernel execution time prediction model, and kernel dependency information to decide which kernels to schedule concurrently on main GPU cores and the GPU cores in memory. Our experimental evaluations across 25 GPU applications demonstrate that these two techniques can significantly improve both application performance (by 25% and 42%, respectively, on average) and energy efficiency (by 28% and 27%).
Big Data: A Survey. In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as cloud computing, the Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine several representative applications of big data, including enterprise management, the Internet of Things, online social networks, medical applications, collective intelligence, and the smart grid. These discussions aim to provide readers with a comprehensive overview and big picture of this exciting area. This survey is concluded with a discussion of open problems and future directions.
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have a significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor-of-8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
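LEACH's randomized rotation condenses into its election threshold: in round r, an eligible node becomes a cluster-head with probability T(n) = p / (1 - p(r mod 1/p)), where p is the desired head fraction. The sketch below implements only the setup-phase election; TDMA scheduling, data fusion, and the per-epoch eligibility reset are omitted.

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r and head fraction p."""
    return p / (1.0 - p * (r % (1.0 / p)))

def elect_cluster_heads(nodes, p, r, been_head):
    """Setup phase: each eligible node draws a random number and elects
    itself head if the draw falls below the threshold. In the full
    protocol, been_head is reset every 1/p rounds (omitted here)."""
    heads = []
    for n in nodes:
        if been_head[n]:
            continue  # already served as head in this epoch
        if random.random() < leach_threshold(p, r):
            heads.append(n)
            been_head[n] = True
    return heads

nodes = list(range(20))
been_head = {n: False for n in nodes}
for r in range(5):
    print("round", r, "heads:", elect_cluster_heads(nodes, 0.1, r, been_head))
```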
The path to the software-defined radio receiver After being the subject of speculation for many years, a software-defined radio receiver concept has emerged that is suitable for mobile handsets. A key step forward is the realization that in mobile handsets, it is enough to receive one channel with any bandwidth, situated in any band. Thus, the front-end can be tuned electronically. Taking a cue from a digital front-end, the receiver's flexible ...
On the Design of a Reconfigurable OTA-C Filter for Software Radio This paper presents a novel approach in the design of integrated reconfigurable and programmable analog filters for commercial mobile applications. The method is described in the specific context of an analog filter which is capable of operating in both the complex and real domains; the filter is designed using the OTA-C technique, features a programmable order and transfer function, and is envisioned as part of a combined low-IF/zero-IF receiver architecture for a software-defined radio transceiver. The novel method is demonstrated by presenting the simulation results of the real low-pass/complex band-pass filter for a 0.5 dB Chebyshev approximation.
Time-Varying Input and State Delay Compensation for Uncertain Nonlinear Systems. A robust controller is developed for uncertain, second-order nonlinear systems subject to simultaneous unknown, time-varying state delays and known, sufficiently small time-varying input delays in addition to additive, sufficiently smooth disturbances. An integral term composed of previous control values facilitates a delay-free open-loop error system and the development of the feedback control structure. A stability analysis based on Lyapunov-Krasovskii (LK) functionals guarantees uniformly ultimately bounded tracking under the assumption that the delays are bounded and slowly varying. Numerical simulations illustrate robustness of the developed method to various combinations of simultaneous input and state delays.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
score_0–score_13: 1.025125, 0.025333, 0.025, 0.025, 0.025, 0.025, 0.0202, 0.011527, 0.00016, 0, 0, 0, 0, 0
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use An LNA based on a noise-cancelled common-gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, a NF of 3 dB, and S11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW.
Charge-domain signal processing of direct RF sampling mixer with discrete-time filters in Bluetooth and GSM receivers RF circuits for multi-GHz frequencies have recently migrated to low-cost digital deep-submicron CMOS processes. Unfortunately, this process environment, which is optimized only for digital logic and SRAM memory, is extremely unfriendly for conventional analog and RF designs. We present fundamental techniques recently developed that transform the RF and analog circuit design complexity to the digitally intensive domain for a wireless RF transceiver, so that it enjoys the benefits of digital and switched-capacitor approaches. Direct RF sampling techniques allow great flexibility in reconfigurable radio design. Digital signal processing concepts are used to help relieve analog design complexity, allowing one to reduce cost and power consumption in a reconfigurable design environment. The ideas presented have been used at Texas Instruments to develop two generations of commercial digital RF processors: a single-chip Bluetooth radio and a single-chip GSM radio. We further present details of the RF receiver front-end for a GSM radio realized in a 90-nm digital CMOS technology. The circuit, consisting of a low-noise amplifier, a transconductance amplifier, and a switching mixer, offers a 32.5 dB dynamic range with digitally configurable voltage gain from 40 dB down to 7.5 dB. A series of decimation and discrete-time filtering stages follows the mixer and performs highly linear second-order lowpass filtering to reject close-in interferers. The front-end gains can be configured with an automatic gain control to select an optimal setting to form a trade-off between noise figure and linearity and to compensate for process and temperature variations. Even under digital switching activity, the noise figure at the 40 dB maximum gain is 1.8 dB, and IIP2 is +50 dBm at the 34 dB gain. The variation of the input matching versus multiple gains is less than 1 dB. The circuit in total occupies 3.1 mm². The LNA, TA, and mixer consume less than 15.3 mA at a supply voltage of 1.4 V.
Low-Area Active-Feedback Low-Noise Amplifier Design in Scaled Digital CMOS The emerging concept of multistandard radios calls for low-noise amplifier (LNA) solutions able to comply with their needs. Meanwhile, the increasing cost of scaled CMOS pushes towards low-area solutions in standard, digital CMOS. Feedback LNAs are able to meet both demands. This paper is devoted to the design of low-area active-feedback LNAs. We discuss the design of wideband, narrowband and multiband implementations. We demonstrate that competitive RF performance is achievable thanks to CMOS downscaling, pleasing many applications because of their low cost (digital CMOS) and low area (bondpad size).
Second-order intermodulation mechanisms in CMOS downconverters An in-depth analysis of the mechanisms responsible for second-order intermodulation distortion in CMOS active downconverters is proposed in this paper. The achievable second-order input intercept point (IIP2) has a fundamental limit due to nonlinearity and mismatches in the switching stage and improves with technology scaling. Second-order intermodulation products generated by the input transcondu...
Software-defined radio receiver: dream to reality This article describes a fully integrated 90 nm CMOS software-defined radio receiver operating in the 800 MHz to 5 GHz band. Unlike the classical SDR paradigm, which digitizes the whole spectrum uniformly, this receiver acts as a signal conditioner for the analog-to-digital converters, emphasizing only the wanted channel. Thus, the ADCs operate with modest resolution and sample rate, consuming low power. This approach makes portable SDR a reality
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS stage is carefully optimized. We show that a CS stage with deep submicron transistors can have high IIP2, because the v_GS·v_DS cross-term in a two-dimensional Taylor approximation of the I_DS(V_GS, V_DS) characteristic can cancel the traditionally dominant square-law term in the I_DS(V_GS) relation at practical gain values. Using standard 65 nm transistors at 1.2 V supply voltage, we realize a balun-LNA with 15 dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz and 3.5 GHz with gain and phase errors below 0.3 dB and ±2 degrees. The total power consumption is 21 mW, while the active area is only 0.01 mm².
A novel power optimization technique for ultra-low power RFICs This paper presents a novel power optimization technique for ultra-low power (ULP) RFICs. A new figure of merit, namely the gmfT-to-current ratio (gmfT/ID), is defined for a MOS transistor, which accounts for both the unity-gain frequency and the current consumption. It is demonstrated both analytically and experimentally that gmfT/ID reaches its maximum value in the moderate inversion region. Next, using the proposed method, a power-optimized common-gate low-noise amplifier (LNA) with active load has been designed and fabricated in a 0.18-μm CMOS process operating at 950 MHz. Measurement results show a noise figure (NF) of 4.9 dB and a small-signal gain of 15.6 dB with a record-breaking power dissipation of only 100 μW.
A Single-Chip 10-Band WCDMA/HSDPA 4-Band GSM/EDGE SAW-less CMOS Receiver With DigRF 3G Interface and +90 dBm IIP2 This paper describes the design and performance of a 90 nm CMOS SAW-less receiver with DigRF interface that supports 10 WCDMA bands (I, II, III, IV, V, VI, VIII, IX, X, XI) and 4 GSM bands (GSM850, EGSM900, DCS1800, PCS1900). The receiver is part of a single-chip SAW-less transceiver reference platform IC for mass-market smartphones, which has been designed to meet Category 10 HSDPA (High Speed Do...
Current-Recycling Complex Filter for Bluetooth-Low-Energy Applications This brief presents a flexible gm-C complex filter for Bluetooth-low-energy receivers. Each complex pole is realized by sharing a bias current among transconductors, leading to an ultralow-power solution. The filter, which is designed in a 130-nm complementary metal–oxide–semiconductor technology, has a power dissipation of only 42 μW from a 0.8-V supply voltage. With a center frequency and a passband of 2 and 1 MHz, respectively, the simulated filter provides a rejection of the adjacent channel of 34 dB and a spurious-free dynamic range of 52.7 dB.
A CMOS dual-switching power-supply modulator with 8% efficiency improvement for 20MHz LTE Envelope Tracking RF power amplifiers Envelope Tracking (ET) is an efficiency enhancement technique where the power supply of a linear RF power amplifier (PA) follows the envelope of the RF signal. It is specifically effective for the high peak-to-average power ratio (PAPR) signals of modern communication systems. But due to envelope bandwidth expansion and watt-level power requirements, the design of wide bandwidth and high efficiency power supply modulators remains one of the most challenging aspects of ET systems.
Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
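The tension between storing and recomputing intermediate stages can be demonstrated without Halide itself. Below, the same two-stage blur pipeline is written two ways in plain Python/NumPy, mimicking two schedules: materializing the first stage (more memory traffic, each value computed once) versus inlining it (no stored intermediate, each value recomputed three times). This models the scheduling tradeoff only; it is not Halide code.

```python
import numpy as np

def blur_store_root(img):
    """Schedule 1: materialize the horizontal blur for the whole image."""
    h = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0   # stored stage
    return (h[:-2, :] + h[1:-1, :] + h[2:, :]) / 3.0       # consumer stage

def blur_inline(img):
    """Schedule 2: recompute the horizontal blur on demand inside the
    vertical stage (each h row is produced three times)."""
    def h(r):  # horizontal blur of one row, recomputed per use
        row = img[r]
        return (row[:-2] + row[1:-1] + row[2:]) / 3.0
    rows = [(h(r) + h(r + 1) + h(r + 2)) / 3.0
            for r in range(img.shape[0] - 2)]
    return np.stack(rows)

img = np.arange(36, dtype=float).reshape(6, 6)
assert np.allclose(blur_store_root(img), blur_inline(img))  # same result
```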
Multiparadigm Programming in Mozart/Oz, Second International Conference, MOZ 2004, Charleroi, Belgium, October 7-8, 2004, Revised Selected and Invited Papers
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
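A PUMP lookup is keyed on the opcode plus the tags of the program counter, current instruction, and operands, with misses resolved by a software policy. The toy model below (a dictionary as the rule cache, a taint policy as the miss handler; all names and the policy itself are illustrative) shows that control flow.

```python
# Minimal model of a PUMP-style rule cache: lookups are keyed on
# (opcode, pc_tag, ci_tag, op1_tag, op2_tag); misses invoke a
# software-defined policy that computes result tags or signals a violation.
rule_cache = {}

def taint_policy(opcode, pc_tag, ci_tag, op1_tag, op2_tag):
    """Example policy: taint propagates through ALU operations."""
    if opcode == "load" and op1_tag == "secret" and pc_tag == "public":
        raise PermissionError("policy violation: secret load in public context")
    result_tag = "tainted" if "tainted" in (op1_tag, op2_tag) else "clean"
    return pc_tag, result_tag          # (new PC tag, result tag)

def pump_lookup(opcode, pc_tag, ci_tag, op1_tag, op2_tag):
    key = (opcode, pc_tag, ci_tag, op1_tag, op2_tag)
    if key not in rule_cache:          # miss: run the miss handler once
        rule_cache[key] = taint_policy(*key)
    return rule_cache[key]             # hit: no software involvement

print(pump_lookup("add", "public", "code", "tainted", "clean"))  # ('public', 'tainted')
print(pump_lookup("add", "public", "code", "tainted", "clean"))  # served from cache
```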
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but potentially low parallelism. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.01602, 0.008986, 0.007435, 0.006294, 0.003665, 0.00211, 0.000325, 0.000047, 0.000006, 0, 0, 0, 0, 0
A CMOS 1.2-V Hybrid Current- and Voltage-Mode Three-Way Digital Doherty PA With Built-In Phase Nonlinearity Compensation This article presents a fully integrated hybrid current- and voltage-mode three-way digital Doherty power amplifier (PA) in CMOS using a single supply with built-in large-signal phase nonlinearity compensation. The nonlinear phase responses in both the current-mode digital PA (C-DPA) and the voltage-mode digital PA (V-DPA) are theoretically analyzed, which are further leveraged to achieve the PA nonlinear phase compensation. To the best of our knowledge, it is the first time the amplitude modulation (AM)–phase modulation (PM) response in V-DPA is explained in detail. The PA exploits the lagging AM–PM distortion of the C-DPA and the leading AM–PM distortion of the V-DPAs, resulting in built-in large-signal phase nonlinearity cancellation without phase pre-distortion. Moreover, the transformer-based Doherty output network provides active load modulation to both C-DPA and V-DPA, achieving a three-way Doherty PA for deep power back-off (PBO) efficiency enhancement without supply modulation. Finally, the 1.2-V single power supply for PA power cells and input drivers reduces supply complexity. As a proof-of-concept, the hybrid three-way digital Doherty PA is implemented in a 45-nm CMOS SOI process. The measurements show a 38.5% peak drain efficiency (DE), 22.4-dBm saturated output power (Psat), and 32.1%/18.7% DE at 3.4-/9.3-dB PBO, achieving 1.25×/1.46× PBO efficiency enhancement over the idealistic class-B PA PBO efficiency at 2.3 GHz. The measured AM–PM distortion is 4.7°, which is state-of-the-art among DPAs with PBO efficiency enhancement. The hybrid digital Doherty PA supports 40-/10-MSym/s 64-quadrature amplitude modulation (QAM)/256-QAM at 15.3-/15.2-dBm average output power and 24.7%/23.2% average PA DE while maintaining error vector magnitude (EVM) and adjacent channel leakage ratio (ACLR) lower than −32/−32.1 dB and −30.6/−28.9 dBc without phase pre-distortion.
A Watt-Level Quadrature Class-G Switched-Capacitor Power Amplifier With Linearization Techniques A 30.1-dBm quadrature transmitter is implemented based on Class-G IQ-cell-shared switched-capacitor power amplifier (SCPA) and voltage mismatch compensation techniques for dual-supply voltage in a Class-G SCPA. For the Class-G operation in the IQ-cell-shared quadrature SCPA, a merged cell switching (MCS) technique comprising vector amplitude switching (VAS) and vector phase switching (VPS) is proposed. The VAS boosts system efficiency (SE) by enabling the Class-G operation with multiple supply voltages and the VPS conserves vector information in the merged cells that process the quadrature vectors. The linearization technique for Class-G SCPA minimizes the distortion that arises from the supply-voltage mismatches in a multiple supply-voltage system. Two SCPAs are coupled with a power combining transformer to achieve a watt-level output power. A time-domain interpolation minimizes the spectral image. The prototype SCPA, fabricated in a 65-nm CMOS, achieves peak output power and SE of 30.1 dBm and 37.0%, respectively. It achieves error vector magnitude (EVM) of −40.7 dB (−40.3 dB) and SE of 14.7% (18.3%) at an average output power of 19.5 dBm (22.5 dBm) with 802.11g 64-QAM OFDM signal with 10.6 dB peak-to-average power ratio (PAPR) (20-MHz single-carrier 256-QAM signal with 7.6-dB PAPR).
All digital-quadrature-modulator based wideband wireless transmitters A novel architecture for a fully digital wideband wireless transmitter is presented. The proposed structure replaces high-dynamic-range analog circuits with high-speed digital circuits and offers a simple and flexible architecture, which requires less area, consumes less power, and delivers higher performance compared to those of the conventional modulators used for wideband systems. The design is based on a standard 65-nm CMOS process and is suitable for integration with a digital signal processor, memory, and logic implemented in such a process. The presented transmitter is based on a novel digital quadrature modulator (DQM), which achieves digital modulation in a Cartesian coordinate system. The novel architecture employs a single converter, referred to as the differential-like digital-to-RF converter (DDRC), as it is based on fully digitally combining the quadrature baseband signals. The DDRC, at the heart of the DQM, combines the functionalities of a mixer, a digital-to-analog converter, and an RF filter into a single circuit. The total area for the digital blocks is 0.04 mm², with a power consumption of roughly 5 mW. It is shown that the proposed transmitter meets the spectral mask, defined in the targeted IEEE 802.16e (WiMAX) standard, with a margin of 20 dB and achieves an error-vector-magnitude (EVM) performance of −36 dB with a margin of 6 dB.
A Fully-Integrated High-Power Linear CMOS Power Amplifier With a Parallel-Series Combining Transformer. In this paper, a linear CMOS power amplifier (PA) with high output power (34-dBm saturated output power) for high data-rate mobile applications is introduced. The PA incorporates a parallel combination of four differential PA cores to generate high output power with good efficiency and linearity. To implement an efficient on-chip power combiner in a small form-factor, we propose a parallel-series ...
Voltage Mode Doherty Power Amplifier. This paper presents a new wideband Doherty amplifier technique that can achieve high efficiency while maintaining excellent linearity. By modifying a “forgotten” topology originally proposed by Doherty, a new Doherty amplifier architecture is realized with two voltage mode power amplifiers (PAs) and transformers, thus eliminating a narrowband impedance inverter. The voltage mode PA is implemented ...
A Multimode Transmitter in 0.13 μm CMOS Using Direct-Digital RF Modulator This paper presents a system-independent transmitter architecture based on a direct-digital RF-modulator which combines the D/A conversion, up-conversion, unwanted sideband rejection, power control, and part of the digital image-rejection filtering into a single mixed-signal circuit block. The multimode capability of the architecture is demonstrated with WCDMA, EDGE, and WLAN system requirements. ...
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitist approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated-sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all three of the above difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm, two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
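The O(MN²) fast non-dominated sorting step is compact enough to sketch directly; the Python version below assumes minimization and omits crowding distance and selection.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in all objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(pop):
    """NSGA-II's O(MN^2) partition of a population into Pareto fronts."""
    S = [[] for _ in pop]      # S[p]: indices of solutions dominated by p
    n = [0] * len(pop)         # n[p]: number of solutions dominating p
    fronts = [[]]
    for p in range(len(pop)):
        for q in range(len(pop)):
            if dominates(pop[p], pop[q]):
                S[p].append(q)
            elif dominates(pop[q], pop[p]):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)     # non-dominated: first front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:       # all dominators already ranked
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]              # drop the trailing empty front

pop = [(1, 5), (2, 2), (3, 1), (4, 4), (5, 5)]
print(fast_non_dominated_sort(pop))  # [[0, 1, 2], [3], [4]]
```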
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
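The systems surveyed above enforce such policies statically at the language level; purely for intuition, the sketch below shows the dynamic flavor of the same idea, with confidentiality labels propagated through computation and checked at a public sink. All names are illustrative.

```python
class Labeled:
    """A value carrying an information-flow label ('public' or 'secret')."""
    def __init__(self, value, label="public"):
        self.value, self.label = value, label

    def __add__(self, other):
        # Explicit flow: the result is as secret as its most secret input.
        lbl = "secret" if "secret" in (self.label, other.label) else "public"
        return Labeled(self.value + other.value, lbl)

def public_output(x):
    """A public sink: the check rejects any value tainted by secrets."""
    if x.label == "secret":
        raise PermissionError("information-flow violation: secret -> public sink")
    print(x.value)

salary = Labeled(90000, "secret")
bonus = Labeled(500, "public")
public_output(bonus)                   # fine: public data to a public sink
try:
    public_output(salary + bonus)      # rejected: secret flowed into the sum
except PermissionError as e:
    print(e)
```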
An Introduction To Compressive Sampling Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (the Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation: the signal is uniformly sampled at or above the Nyquist rate. This article s...
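In symbols, the alternative to Nyquist-rate acquisition that the article surveys is usually stated as follows (standard notation, not quoted from the article):

```latex
% Acquire m << n linear measurements y = \Phi x of a signal that is
% s-sparse in a basis \Psi (x = \Psi\alpha, \|\alpha\|_0 \le s), and
% recover it by l1 minimization; on the order of s\log(n/s) random
% measurements suffice with high probability, far below the Nyquist count.
\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_{1}
\quad \text{subject to} \quad y = \Phi \Psi \alpha,
\qquad m \geq C\, s \log(n/s)
```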
QsCores: trading dark silicon for scalable energy efficiency with quasi-specific cores Transistor density continues to increase exponentially, but power dissipation per transistor is improving only slightly with each generation of Moore's law. Given the constant chip-level power budgets, this exponentially decreases the percentage of transistors that can switch at full frequency with each technology generation. Hence, while the transistor budget continues to increase exponentially, the power budget has become the dominant limiting factor in processor design. In this regime, utilizing transistors to design specialized cores that optimize energy-per-computation becomes an effective approach to improve system performance. To trade transistors for energy efficiency in a scalable manner, we propose Quasi-specific Cores, or QsCores, specialized processors capable of executing multiple general-purpose computations while providing an order of magnitude more energy efficiency than a general-purpose processor. The QsCores design flow is based on the insight that similar code patterns exist within and across applications. Our approach exploits these similar code patterns to ensure that a small set of specialized cores support a large number of commonly used computations. We evaluate QsCores' ability to target both a single application library (e.g., data structures) as well as a diverse workload consisting of applications selected from different domains (e.g., SPECINT, EEMBC, and Vision). Our results show that QsCores can provide 18.4× better energy efficiency than general-purpose processors while reducing the amount of specialized logic required to support the workload by up to 66%.
A 41-phase switched-capacitor power converter with 3.8mV output ripple and 81% efficiency in baseline 90nm CMOS.
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment which results in resource allocation problems, or, are job-based where for each invocation, the job along with the data has to be transferred for remote execution resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a Bit-Torrent like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but potentially low parallelism. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.066667, 0.033333, 0.033333, 0.033333, 0.022222, 0.013333, 0, 0, 0, 0, 0, 0, 0, 0
Spiking Neural Networks Hardware Implementations and Challenges: A Survey Neuromorphic computing has become a major research field for both academic and industrial actors. As opposed to von Neumann machines, brain-inspired processors aim at bringing the memory and the computational elements closer together to efficiently evaluate machine learning algorithms. Recently, spiking neural networks, a generation of cognitive algorithms employing computational primitives mimicking neuron and synapse operational principles, have become an important part of deep learning. They are expected to improve the computational performance and efficiency of neural networks, but they are best suited for hardware able to support their temporal dynamics. In this survey, we present the state of the art of hardware implementations of spiking neural networks and the current trends in algorithm elaboration, from model selection to training mechanisms. The scope of existing solutions is extensive; we thus present the general framework and study the relevant particularities on a case-by-case basis. We describe the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level and discuss their related advantages and challenges.
A 65-nm Neuromorphic Image Classification Processor With Energy-Efficient Training Through Direct Spike-Only Feedback Recent advances in neural network (NN) and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance NN accelerators for use inside server systems to energy-efficient edge computing systems. While most of these studies have focused on designing inference engines, implementing the training process of an NN for energy-constrained mobile devices has remained a challenge due to the requirement of higher numerical precision. In this article, we aim to build an on-chip learning system that achieves highly energy-efficient training for NNs without degradation in performance on machine learning tasks. To achieve this goal, we adapt and optimize a neuromorphic learning algorithm and propose hardware design techniques to fully exploit the properties of the modifications. We verify that our system achieves energy-efficient training with only 7.5% more energy consumption compared with its highly efficient inference of 236 nJ/image on the handwritten digit (MNIST) images. Moreover, our system achieves 97.83% classification accuracy on the MNIST test data set, which outperforms prior neuromorphic on-chip learning systems and is close to the performance of the conventional method for training deep NNs, backpropagation.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
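In the low-dimensional approximation, the final recognition rule reduces to a nearest-subspace test. The NumPy sketch below covers only that last step, fitting each person's subspace by SVD on synthetic data; the illumination-cone construction itself is omitted.

```python
import numpy as np

def fit_subspace(images, dim=3):
    """Fit a low-dimensional linear subspace to one person's images
    taken under different lightings (columns = vectorized images)."""
    U, _, _ = np.linalg.svd(images, full_matrices=False)
    return U[:, :dim]                       # orthonormal basis

def subspace_distance(x, U):
    """Euclidean distance from test image x to the subspace span(U)."""
    return np.linalg.norm(x - U @ (U.T @ x))

def classify(x, subspaces):
    """Assign x the identity of the closest illumination subspace."""
    return min(subspaces, key=lambda pid: subspace_distance(x, subspaces[pid]))

rng = np.random.default_rng(0)
subspaces = {pid: fit_subspace(rng.normal(size=(64, 9))) for pid in ("A", "B")}
probe = subspaces["A"] @ rng.normal(size=3)   # a point in A's subspace
print(classify(probe, subspaces))             # -> 'A'
```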
A 1000 fps Vision Chip Based on a Dynamically Reconfigurable Hybrid Architecture Comprising a PE Array Processor and Self-Organizing Map Neural Network This paper proposes a vision chip hybrid architecture with dynamically reconfigurable processing element (PE) array processor and self-organizing map (SOM) neural network. It integrates a high speed CMOS image sensor, three von Neumann-type processors, and a non-von Neumann-type bio-inspired SOM neural network. The processors consist of a pixel-parallel PE array processor with O(N×N) parallelism, a row-parallel row-processor (RP) array processor with O(N) parallelism and a thread-parallel dual-core microprocessor unit (MPU) with O(2) parallelism. They execute low-, mid- and high-level image processing, respectively. The SOM network speeds up high-level processing in pattern recognition tasks by O(N/4×N/4), which improves the chip performance remarkably. The SOM network can be dynamically reconfigured from the PE array to largely save chip area. A prototype chip with a 256 × 256 image sensor, a reconfigurable 64 × 64 PE array processor/16 × 16 SOM network, a 64 × 1 RP array processor and a dual-core 32-bit MPU was implemented in a 0.18 μm CMOS image sensor process. The chip can perform image capture and various-level image processing at a high speed and in flexible fashion. Various complicated applications including M-S functional solution, horizon estimation, hand gesture recognition, face recognition are demonstrated at high speed from several hundreds to >1000 fps.
Efficient FPGA Implementations of Pair and Triplet-Based STDP for Neuromorphic Architectures Synaptic plasticity is envisioned to bring about learning and memory in the brain. Various plasticity rules have been proposed, among which spike-timing-dependent plasticity (STDP) has gained the highest interest across various neural disciplines, including neuromorphic engineering. Here, we propose highly efficient digital implementations of pair-based STDP (PSTDP) and triplet-based STDP (TSTDP) on field-programmable gate arrays that do not require dedicated floating-point multipliers and hence need minimal hardware resources. The implementations are verified by using them to replicate a set of complex experimental data, including those from pair, triplet, quadruplet, frequency-dependent pairing, as well as Bienenstock–Cooper–Munro experiments. We demonstrate that the proposed TSTDP design has a higher operating frequency that leads to 2.46× faster weight adaptation (learning) and achieves an 11.55-fold improvement in resource usage, compared to a recent implementation of a calcium-based plasticity rule capable of exhibiting similar learning performance. In addition, we show that the proposed PSTDP and TSTDP designs, respectively, consume 2.38× and 1.78× fewer resources than the most efficient PSTDP implementation in the literature. As a direct result of the efficiency and powerful synaptic capabilities of the proposed learning modules, they could be integrated into large-scale digital neuromorphic architectures to enable high-performance STDP learning.
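For reference, the pair-based rule such designs implement is an exponentially windowed weight update on each pre/post spike-time difference; a scalar sketch follows (constants are illustrative; triplet STDP adds a second, slower post-synaptic trace).

```python
import math

# Pair-based STDP: potentiate when pre precedes post (dt > 0),
# depress when post precedes pre (dt < 0). Constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012      # learning amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def pstdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # causal pair: long-term potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # anti-causal pair: long-term depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(pstdp_dw(10.0, 15.0))   # small potentiation
print(pstdp_dw(15.0, 10.0))   # small depression
```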
Tianjic: A Unified and Scalable Chip Bridging Spike-Based and Continuous Neural Computation Toward the long-standing dream of artificial intelligence, two successful solution paths have been paved: 1) neuromorphic computing and 2) deep learning. Recently, they tend to interact for simultaneously achieving biological plausibility and powerful accuracy. However, models from these two domains have to run on distinct substrates, i.e., neuromorphic platforms and deep learning accelerators, respectively. This architectural incompatibility greatly compromises the modeling flexibility and hinders promising interdisciplinary research. To address this issue, we build a unified model description framework and a unified processing architecture (Tianjic), which covers the full stack from software to hardware. By implementing a set of integration and transformation operations, Tianjic is able to support spiking neural networks, biological dynamic neural networks, multilayered perceptron, convolutional neural networks, recurrent neural networks, and so on. A compatible routing infrastructure enables homogeneous and heterogeneous scalability on a decentralized many-core network. Several optimization methods are incorporated, such as resource and data sharing, near-memory processing, compute/access skipping, and intra-/inter-core pipeline, to improve performance and efficiency. We further design streaming mapping schemes for efficient network deployment with a flexible tradeoff between execution throughput and resource overhead. A 28-nm prototype chip is fabricated with >610-GB/s internal memory bandwidth. A variety of benchmarks are evaluated and compared with GPUs and several existing specialized platforms. In summary, the fully unfolded mapping can achieve significantly higher throughput and power efficiency; the semi-folded mapping can save 30x resources while still presenting comparable performance on average. Finally, two hybrid-paradigm examples, a multimodal unmanned bicycle and a hybrid neural network, are demonstrated to show the potential of our unified architecture. This article paves a new way to explore neural computing.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. More finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
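The winnowing selection itself fits in a few lines: hash every k-gram, then keep the minimum hash of each window of w consecutive hashes (rightmost on ties). The sketch below uses Python's built-in hash for brevity where a real implementation would use a rolling hash.

```python
def winnow(text, k=5, w=4):
    """Winnowing fingerprint selection: hash every k-gram, then record
    the minimum hash of each window of w hashes (rightmost minimum on
    ties) together with its position."""
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        lo = min(window)
        j = max(p for p in range(w) if window[p] == lo)  # rightmost minimum
        fingerprints.add((i + j, window[j]))
    return fingerprints

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "a quick brown fox jumps over a sleepy dog"
shared = {h for _, h in winnow(doc1)} & {h for _, h in winnow(doc2)}
print(len(shared), "shared fingerprints")  # > 0: the copied phrase is detected
```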
Cache Games – Bringing Access-Based Cache Attacks on AES to Practice Side-channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables, for which no efficient techniques to identify the beginning of AES rounds were known, the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial-of-service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with a 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8% power added ...
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to choose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3–18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.1
0.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
0
0
7.6 A 65nm 236.5nJ/Classification Neuromorphic Processor with 7.5% Energy Overhead On-Chip Learning Using Direct Spike-Only Feedback Advances in neural network and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance convolutional neural network (CNN) accelerators to energy-efficient deep-neural network (DNN) edge computing systems [1]. While most studies have focused on designing inference engines, recent works have shown that on-chip training could serve practical purposes such as compensating for process variations of in-memory computing [2] or adapting to changing environments in real time [3]. However, these successes were limited to relatively simple tasks mainly due to the large energy overhead of the training process. These problems arise primarily from the high-precision arithmetic and memory required for error propagation and weight updates, in contrast to error-tolerant inference operation; the capacity requirements of a learning system are significantly higher than those of an inference system [4].
Unsupervised AER Object Recognition Based on Multiscale Spatio-Temporal Features and Spiking Neurons This article proposes an unsupervised address event representation (AER) object recognition approach. The proposed approach consists of a novel multiscale spatio-temporal feature (MuST) representation of input AER events and a spiking neural network (SNN) using spike-timing-dependent plasticity (STDP) for object recognition with MuST. MuST extracts the features contained in both the spatial and temporal information of AER event flow, and forms an informative and compact feature spike representation. We show not only how MuST exploits spikes to convey information more effectively, but also how it benefits recognition using an SNN. The recognition process is performed in an unsupervised manner, which does not require specifying the desired status of every single neuron of the SNN, and thus can be flexibly applied in real-world recognition tasks. The experiments are performed on five AER datasets including a new one named GESTURE-DVS. Extensive experimental results show the effectiveness and advantages of the proposed approach.
DART: Distribution Aware Retinal Transform for Event-Based Cameras We introduce a generic visual descriptor, termed as distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-words classification framework and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101); (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) Statistical bootstrapping is leveraged with online learning for overcoming the low-sample problem during the one-shot learning of the tracker, (ii) Cyclical shifts are induced in the log-polar domain of the DART descriptor to achieve robustness to object scale and rotation variations; (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to result in a high intersection-over-union score with augmented ground truth annotations on the publicly available event camera dataset; (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain.
Networks of spiking neurons: the third generation of neural network models The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch–Pitts neurons (i.e., threshold gates) and on sigmoidal gates. In particular, it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.
STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition Spiking neural networks (SNNs) with a large number of weights and varied weight distribution can be difficult to implement in emerging in-memory computing hardware due to the limitations on crossbar size (implementing dot product), the constrained number of conductance states in non-CMOS devices and the power budget. We present a sparse SNN topology where noncritical connections are pruned to reduce the network size, and the remaining critical synapses are weight quantized to accommodate for limited conductance states. Pruning is based on the power law weight-dependent spike timing dependent plasticity model; synapses between pre- and post-neuron with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The weights of the retained connections are quantized to the available number of conductance states. The process of pruning noncritical connections and quantizing the weights of critical synapses is performed at regular intervals during training. We evaluated our sparse and quantized network on MNIST dataset and on a subset of images from Caltech-101 dataset. The compressed topology achieved a classification accuracy of 90.1% (91.6%) on the MNIST (Caltech-101) dataset with 3.1X (2.2X) and 4X (2.6X) improvement in energy and area, respectively. The compressed topology is energy and area efficient while maintaining the same classification accuracy of a 2-layer fully connected SNN topology.
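A compact sketch of the prune-then-quantize step described above may be useful. The correlation scores, threshold, and number of states below are hypothetical inputs standing in for the power-law STDP statistics and device constraints the paper actually uses.

```python
import numpy as np

def prune_and_quantize(weights, correlation, keep_thresh=0.1, n_states=16):
    """Zero out low-correlation synapses, then snap the survivors to a
    limited set of conductance levels (mimicking crossbar constraints)."""
    w = weights.copy()
    w[correlation < keep_thresh] = 0.0                 # prune noncritical synapses
    levels = np.linspace(w.min(), w.max(), n_states)   # allowed conductance states
    q = levels[np.abs(w[..., None] - levels).argmin(axis=-1)]
    q[w == 0.0] = 0.0                                  # pruned synapses stay absent
    return q

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
C = rng.random((64, 64))                               # stand-in STDP correlations
W_compressed = prune_and_quantize(W, C)
```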
Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations. In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements-axonal arbor, synapse, dendritic tree, and soma-with dedicated o...
ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars. A number of recent efforts have attempted to design accelerators for popular machine learning algorithms, such as those involving convolutional and deep neural networks (CNNs and DNNs). These algorithms typically involve a large number of multiply-accumulate (dot-product) operations. A recent project, DaDianNao, adopts a near data processing approach, where a specialized neural functional unit performs all the digital arithmetic operations and receives input weights from adjacent eDRAM banks. This work explores an in-situ processing approach, where memristor crossbar arrays not only store input weights, but are also used to perform dot-product operations in an analog manner. While the use of crossbar memory as an analog dot-product engine is well known, no prior work has designed or characterized a full-fledged accelerator based on crossbars. In particular, our work makes the following contributions: (i) We design a pipelined architecture, with some crossbars dedicated for each neural network layer, and eDRAM buffers that aggregate data between pipeline stages. (ii) We define new data encoding techniques that are amenable to analog computations and that can reduce the high overheads of analog-to-digital conversion (ADC). (iii) We define the many supporting digital components required in an analog CNN accelerator and carry out a design space exploration to identify the best balance of memristor storage/compute, ADCs, and eDRAM storage on a chip. On a suite of CNN and DNN workloads, the proposed ISAAC architecture yields improvements of 14.8×, 5.5×, and 7.5× in throughput, energy, and computational density (respectively), relative to the state-of-the-art DaDianNao architecture.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications.
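The selection rule the article describes can be sketched in a few lines. Here `utility` and `cost` are placeholder callables standing in for the approximate information-utility measures and resource costs, and the trade-off weight `alpha` is an assumption, not the article's formulation.

```python
def select_sensor(candidates, belief, utility, cost, alpha=0.5):
    """Pick the sensor maximizing information utility net of resource cost."""
    return max(candidates,
               key=lambda s: alpha * utility(belief, s) - (1 - alpha) * cost(s))

# Toy usage: prefer nearby sensors whose readings are more informative.
best = select_sensor(range(5), belief=None,
                     utility=lambda b, s: 1.0 / (1 + s),
                     cost=lambda s: 0.1 * s)
```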
MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications This paper introduces MorphoSys, a reconfigurable computing system developed to investigate the effectiveness of combining reconfigurable hardware with general-purpose processors for word-level, computation-intensive applications. MorphoSys is a coarse-grain, integrated, and reconfigurable system-on-chip, targeted at high-throughput and data-parallel applications. It comprises a reconfigurable array of processing cells, a modified RISC processor core, and an efficient memory interface unit. This paper describes the MorphoSys architecture, including the reconfigurable processor array, the control processor, and data and configuration memories. The suitability of MorphoSys for the target application domain is then illustrated with examples such as video compression, data encryption and target recognition. Performance evaluation of these applications indicates improvements of up to an order of magnitude (or more) on MorphoSys, in comparison with other systems.
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. In particular, our hypotheses and the designed adaptive controllers for network synchronization are rather simple in form, which is very useful for practical engineering design. Moreover, numerical simulations are given to show the effectiveness of our synchronization approaches.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations that use compressed tables, for which no efficient technique to identify the beginning of AES rounds is known; such identification is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial-of-service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
The analysis and improvement of a current-steering DAC's dynamic SFDR-I: the cell-dependent delay differences For high-accuracy current-steering digital-to-analog converters (DACs), the delay differences between the current sources are one of the major causes of poor dynamic performance. In this paper, a mathematical model describing the impact of the delay differences on the DAC's SFDR property is presented. The results are verified by comparison to behavioral-level simulations and to actual measurement data from published papers. Based on this analysis, the delay differences cancellation (DDC) technique to reduce the impact of the delay differences on the SFDR property is proposed and verified by simulation results.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2MHz and an FoM of 53 fJ/conversion-step.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as part of a BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural-recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters; then, in most cases, data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.105
0.1
0.1
0.055
0.055
0.016571
0.000037
0
0
0
0
0
0
0
A Low Cost UHF RFID System with OCA tag for Short Range Communication A UHF RF identification system, consisting of a fully integrated tag and a special reader, has been developed for short-range applications with harsh size requirements. The system is fabricated in a standard 0.18-μm CMOS process. The whole tag chip, with antenna, analog front-end, baseband and memory, takes up an area of 0.36 mm², which is smaller than other reported tags with on-chip antenna (OCA) using a standard CMOS process. A self-defined protocol is proposed to reduce the power consumption and minimize the size of the tag. The specialized SoC reader system, consisting of RF transceiver, digital baseband, MCU and host interface, supports both the self-defined and the ISO 18000-6C protocol. Its power consumption is about 500 mW, which is much lower than other reported readers considering the transmission power. Measurement results show that the system's reading range is 2 mm with 20 dBm reader output power. It has also been verified that the data stored in an OCA tag embedded in a pearl can be successfully read out. With an inductive antenna printed on a paper substrate around the OCA tag, the reading range can be extended from several centimeters to meters, depending on the shape and size of the inductive antenna.
A meter-range UWB transceiver chipset for around-the-head audio streaming Any around-the-body wireless system faces challenging requirements. This is especially true in the case of audio streaming around the head e.g. for wireless audio headsets or hearing-aid devices. The behind-the-ear device typically serves multiple radio links e.g. ear-to-ear, ear-to-pocket (a phone or MP3 player) or even a link between the ear and a remote base station such as a TV. Good audio quality is a prerequisite and mW-range power consumption is compulsory in view of battery size. However, the GHz communication channel typically shows a significant attenuation; for an ear-to-ear link, the attenuation due to the narrowband fade dominates and is in the order of 55 to 65dB [1]. The typically small antennas, close to the human body, add another 10 to 15dB of losses. For the ear-to-pocket and the ear-to-remote link, the losses due to body proximity and antenna size reduce, however the distance increases resulting in a similar link budget requirement of 80dB.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
An Ultra Low Power, RF Energy Harvesting Transceiver for Multiple Node Sensor Application An ultra low power, wirelessly-powered RF transceiver for wireless sensor networks is implemented using 180 nm CMOS technology. We propose a 98 μW, 457.5 MHz transmitter with output radiation power of -22 dBm. This transmitter utilizes the 915 MHz wirelessly powering RF signal by frequency division using a true-single-phase-clock (TSPC) divider to generate the carrier frequency with very low power consumption and small die area. The transmitter can support up to 5 Mbps data rate. The telemetry system uses an 8-stage Cockcroft-Walton rectifier to convert RF to DC voltage for energy harvesting. The bandgap reference and linear regulators provide stable DC voltage throughout the system. The receiver recovers data from the modulated wireless powering RF signal to perform time division multiple access (TDMA) for the multiple node system. Power consumption of the TDMA receiver is less than 15 μW. Our proposed transmitter and receiver occupy 0.0018 mm² and 0.0135 mm² of active die area, respectively.
An Ultralow-Power Wake-Up Receiver Based on Direct Active RF Detection. An ultralow-power direct active RF detection wake-up receiver (WuRx) is presented. In order to reduce the power consumption and system complexity, a differential RF envelope detector is implemented in a complementary current-reuse architecture. The detector sensitivity is enhanced through an embedded matching network with signal passive amplification. A prototype receiver is fabricated in 0.18-μm ...
0.56 V, –20 dBm RF-Powered, Multi-Node Wireless Body Area Network System-on-a-Chip With Harvesting-Efficiency Tracking Loop A battery-less, multi-node wireless body area network (WBAN) system-on-a-chip (SoC) is demonstrated. An efficiency tracking loop is proposed that adjusts the rectifier's threshold voltage to maximize the wireless harvesting operation, resulting in a minimum RF sensitivity better than -20 dBm at 904.5 MHz. Each SoC node is injection-locked and time-synchronized with the broadcasted RF basestation power (up to a sensitivity of -33 dBm) using an injection-locked frequency divider (ILFD). Hence, every sensor node is phase-locked with the basestation and all nodes can wirelessly transmit TDMA sensor data concurrently. Designed in a 65 nm-CMOS process, the fabricated sensor SoC contains the energy harvesting rectifier and bandgap, duty-cycled ADC, digital logic, as well as the multi-node wireless clock synchronization and MICS-band transmitter. For a broadcasted basestation power of 20 dBm (30 dBm), experimental measurements verify correct powering, sensor reading, and wireless data transfer for a distance of 3 m (9 m). The entire biomedical system application is verified by reception of room and abdominal temperature monitoring.
High-Efficiency Differential-Drive CMOS Rectifier for UHF RFIDs A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small ...
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in “Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms,” pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ⩾ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
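For readers new to the problem, here is the classical deterministic doubling strategy for w = 2; the paper's randomized algorithm refines exactly this kind of geometric sweep. The oracle-style `goal_found` interface is an illustrative assumption.

```python
def cow_path_two_lanes(goal_found) -> int:
    """Alternate between the two paths, doubling the sweep distance each
    time; returns an upper bound on the total distance walked.
    goal_found(direction, distance) -> True iff the goal lies within
    `distance` on path `direction` (0 or 1)."""
    walked, step, direction = 0, 1, 0
    while True:
        if goal_found(direction, step):
            return walked + step        # final leg reaches the goal
        walked += 2 * step              # walk out and back
        direction ^= 1                  # switch to the other path
        step *= 2                       # double the sweep

# Goal 5 units down path 1; total walked stays within the classical
# factor-9 bound of the distance to the goal.
total = cow_path_two_lanes(lambda d, dist: d == 1 and dist >= 5)
```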
Pinning adaptive synchronization of a general complex dynamical network There are two challenging fundamental questions in pinning control of complex networks: (i) how many nodes of a network with fixed structure and coupling strength should be pinned to reach network synchronization? (ii) how much coupling strength should be applied to a network with fixed structure and pinned nodes to realize network synchronization? To answer these two questions, we propose a general complex dynamical network model and then further investigate its pinning adaptive synchronization. Based on this model, we obtain several novel adaptive synchronization criteria which indeed give positive answers to these two questions. That is, we provide a simple approximate formula for estimating the required number of pinned nodes and the magnitude of the coupling strength for a given general complex dynamical network. Here, the coupling-configuration matrix and the inner-coupling matrix are not necessarily symmetric. Moreover, our pinning adaptive controllers are rather simple compared with some traditional controllers. A Barabási–Albert network example is finally given to show the effectiveness of the proposed synchronization criteria.
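A worked sketch of the kind of controller such pinning-adaptive results typically use may help orient the reader. The notation below (node state x_i, target trajectory s(t), gains k_i) is assumed for illustration and is not quoted from the paper.

```latex
% Error of node i with respect to the common target trajectory s(t):
e_i(t) = x_i(t) - s(t), \qquad
u_i = -c_i(t)\, e_i, \quad \dot{c}_i = k_i\, e_i^{\top} e_i
\quad \text{for pinned nodes}, \qquad u_i = 0 \ \text{otherwise.}
```

The adaptive law grows each pinned node's feedback strength until the errors vanish, which is why such schemes need no a priori bound on the required coupling strength.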
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
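The central small-signal relation behind such an equivalent-circuit analysis is worth stating: once the latch regenerates, a differential imbalance grows exponentially, so the decision time depends logarithmically on the initial overdrive. This is a textbook relation given here for context, not a formula quoted from the paper.

```latex
\Delta v(t) = \Delta v(0)\, e^{t/\tau}, \qquad \tau = \frac{C}{g_m},
\qquad t_{\mathrm{dec}} \approx \tau \ln\!\frac{V_{\mathrm{logic}}}{|\Delta v(0)|}
```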
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.1
0.02
0
0
0
0
0
0
0
A 32.75-Gb/s Voltage-Mode Transmitter With Three-Tap FFE in 16-nm CMOS. This paper describes a 32.75-Gb/s voltage-mode transmitter (TX) with three-tap feed forward equalization that is fabricated in a 16-nm FinFET CMOS technology. The TX uses a dual regulator architecture to allow independent control of output swing, output common-mode, and equalization. A hybrid impedance control scheme is presented where the total number of driver slices is used for coarse impedance...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
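The key-to-node mapping Chord provides is consistent hashing on an identifier ring; the sketch below shows only that rule, with a linear successor scan standing in for Chord's O(log n) finger-table routing. The identifier-space size M is an illustrative assumption.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # identifier space size (illustrative)

def chord_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % M

def successor(node_ids, key: str) -> int:
    """Map `key` to the first node identifier clockwise from its hash."""
    ring = sorted(node_ids)
    i = bisect_left(ring, chord_id(key))
    return ring[i % len(ring)]          # wrap around the ring

nodes = [chord_id(f"node{i}") for i in range(8)]
home = successor(nodes, "some-data-item")   # node responsible for this key
```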
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
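As a concrete instance of the splitting described above, here is a minimal ADMM loop for the lasso: minimize ½‖Ax − b‖² + λ‖z‖₁ subject to x = z. The fixed penalty ρ and fixed iteration count are simplifying assumptions; the review discusses adaptive penalties and proper stopping criteria.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual
    AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = AtA_inv @ (Atb + rho * (z - u))             # ridge-like x-update
        v = x + u
        k = lam / rho
        z = np.maximum(0.0, v - k) - np.maximum(0.0, -v - k)  # soft threshold
        u = u + x - z                                   # dual ascent step
    return z
```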
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by combining a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
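The offloading decision can be illustrated schematically: dense, regular tiles of the adjacency matrix map well onto the analog crossbars, while sparse, irregular remainders go to the digital cores. The tile abstraction and the density threshold below are illustrative assumptions, not Hetraph's actual policy.

```python
import numpy as np

def offload(tiles, density_threshold=0.5):
    """tiles: dense numpy sub-matrices (tiles) of the graph adjacency matrix.
    Route full tiles to analog crossbars, sparse tiles to digital cores."""
    analog, digital = [], []
    for tile in tiles:
        occupancy = np.count_nonzero(tile) / tile.size  # crossbar utilization
        (analog if occupancy >= density_threshold else digital).append(tile)
    return analog, digital
```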
1
0
0
0
0
0
0
0
0
0
0
0
0
0
An area-efficient multistage 3.0- to 8.5-GHz CMOS UWB LNA using tunable active inductors An area-efficient multistage 3.0- to 8.5-GHz ultra-wideband low-noise amplifier (LNA) utilizing tunable active inductors (AIs) is presented. The AI includes a negative impedance circuit (NIC) consisting of a pair of cross-coupled NMOS transistors and is tuned to vary the gain and bandwidth (BW) of the amplifier. Fabricated in a 90-nm digital CMOS process, the proposed fully on-chip LNA occupies a core chip area of only 0.022 mm2. The measurement results show a power gain S21 of 16.0 dB, a noise figure of 3.1-4.4 dB, and an input return loss S11 of less than -10.5 dB over the 3-dB BW of 3.0-8.5 GHz. Tuning the AIs allows one to increase the gain above 18.0 dB and to extend the BW over 9.4 GHz. The LNA consumes 16.0 mW from a power supply of 1.2 V.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The local detection paradigm and its applications to self-stabilization A new paradigm for the design of self-stabilizing distributed algorithms, called local detection, is introduced. The essence of the paradigm is in defining a local condition based on the state of a processor and its immediate neighborhood such that the system is in a globally legal state if and only if the local condition is satisfied at all the nodes. In this work we also extend the model of self-stabilizing networks traditionally assuming memory failure to include the model of dynamic networks (assuming edge failures and recoveries). We apply the paradigm to the extended model, which we call “dynamic self-stabilizing networks”. Without loss of generality, we present the results in the least restrictive shared memory model of read/write atomicity, to which end we construct basic information transfer primitives. Using local detection, we develop deterministic and randomized self-stabilizing algorithms that maintain a rooted spanning tree in a general network whose topology changes dynamically. The deterministic algorithm assumes unique identities while the randomized one assumes an anonymous network. The algorithms use a constant number of memory words per edge in each node, and both the size of memory words and of messages is the number of bits necessary to represent a node identity (typically O(log n) bits, where n is the size of the network). These algorithms provide for the easy construction of self-stabilizing protocols for numerous tasks: reset, routing, topology-update, and self-stabilization transformers that automatically self-stabilize existing protocols for which local detection conditions can be defined.
Spurious Tone Suppression Techniques Applied to a Wide-Bandwidth 2.4 GHz Fractional-N PLL This paper demonstrates that spurious tones in the output of a fractional-N PLL can be reduced by replacing the ΔΣ modulator with a new type of digital quantizer and adding a charge pump offset combined with a sampled loop filter. It describes the underlying mechanisms of the spurious tones, proposes techniques that mitigate the effects of the mechanisms, and presents a phase noise cancell...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). A tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of the internal digital-to-analog converter (DAC) by 1 bit. The internal charge redistribution DAC employs a unit capacitance of 0.5 fF, and the ADC operates near the thermal noise limit. To deal with the problem of capacitor mismatch, a reconfigurable capacitor array and a calibration procedure were developed. The prototype ADC fabricated using a 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR at 1.1 MS/sec from a 0.5 V power supply. The FoM is 6.3 fJ/conversion-step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, reducing the interleaving required. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
PAC-PL: Enabling Control-Flow Integrity with Pointer Authentication in FPGA SoC Platforms Control-flow integrity (CFI) is an effective technique to enhance the security of software systems. Processor designers recently started to provide hardware-based support to efficiently implement CFI, such as the pointer authentication (PA) feature provided by ARM starting from the ARMv8.3-A processor architecture. These CFI mechanisms are also accompanied by support in the mainline codebases of popular compilers (such as GCC and LLVM) and the Linux operating system. As such, they are expected to become established as widespread security mechanisms. Nevertheless, many commercial chips still do not support hardware-assisted CFI, even some of the ones that just entered the market. This paper presents PAC-PL, a solution to enable hardware-assisted CFI on heterogeneous platforms that include a field-programmable gate array (FPGA) fabric, such as the Xilinx Ultrascale+ and Versal. PAC-PL comes with compiler- and OS-level support, is compatible with ARM's PA, and enables advanced key management and attack detection strategies. A timing analysis for PAC-PL is also presented. PAC-PL was experimentally evaluated with state-of-the-art benchmarks in terms of run-time overhead, memory footprint, and FPGA resource consumption, resulting in a practical solution for implementing CFI.
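The mechanism PAC-PL builds on can be illustrated in a few lines: sign a pointer with a keyed MAC stored in its unused upper bits, then verify it before use. The sketch below is purely illustrative, substituting HMAC for ARM's QARMA-based PAC and a software model for the FPGA logic; the key name and the 48-bit address assumption are ours.

```python
import hashlib
import hmac

KEY = b"per-process-secret"     # hypothetical key; real keys live in hardware
MASK48 = (1 << 48) - 1          # assume 48-bit virtual addresses, upper bits free

def pac_sign(ptr: int, context: int) -> int:
    """Place a truncated keyed MAC of (pointer, context) into the unused
    upper 16 bits, mimicking ARM PA's layout (HMAC stands in for QARMA)."""
    msg = (ptr & MASK48).to_bytes(8, "little") + context.to_bytes(8, "little")
    tag = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2], "little")
    return (tag << 48) | (ptr & MASK48)

def pac_auth(signed: int, context: int) -> int:
    """Re-derive the MAC and compare: a mismatch means the pointer (e.g. a
    return address) was corrupted, so CFI enforcement aborts."""
    ptr = signed & MASK48
    if pac_sign(ptr, context) != signed:
        raise RuntimeError("pointer authentication failed")
    return ptr
```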
Hardware-Assisted Detection of Malicious Software in Embedded Systems One of the critical security threats to computer systems is the execution of malware or malicious software. Several intrusion detection systems have been proposed which perform detection analysis in software using the audit files generated by the operating system. Software-based solutions to this problem are relatively slow, so these techniques can be used forensically, but not in real time to stop an exploit before it has an opportunity to do damage. We present a technique to implement intrusion detection for secure embedded systems by detecting behavioral differences between the correct system and the malware. The system is implemented using FPGA logic to enable the detection process to be regularly updated to adapt to new malware and changing system behavior.
Store-and-Forward Buffer Requirements in a Packet Switching Network Previous analytic models for packet switching networks have always assumed infinite storage capacity in store-and-forward (S/F) nodes. In this paper, we relax this assumption and present a model for a packet switching network in which each node has a finite pool of S/F buffers. A packet arriving at a node in which all S/F buffers are temporarily filled is discarded. The channel transmission control mechanisms of positive acknowledgment and time-out of packets are included in this model. Individual S/F nodes are analyzed separately as queueing networks with different classes of packets. The single-node results are interfaced by imposing a continuity-of-flow constraint. A heuristic algorithm for determining a balanced assignment of nodal S/F buffer capacities is proposed. Numerical results for the performance of a 19-node network are illustrated.
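For intuition about why finite buffer pools force packet discards, the closed-form blocking probability of a single M/M/1/K queue is a useful, much simpler stand-in for the paper's multi-class, flow-coupled model:

```python
def mm1k_blocking(rho: float, K: int) -> float:
    """Probability that an arriving packet finds an M/M/1/K queue full
    (K packets already in the system) and is discarded. A single-queue
    illustration only; the paper couples multi-class nodal models through
    a continuity-of-flow constraint."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

# e.g. mm1k_blocking(0.8, K=8) ~= 0.039: about 4% of packets are dropped
```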
Safely Preventing Unbounded Delays During Bus Transactions in FPGA-based SoC Advanced eXtensible Interface (AXI) is an open-standard communication bus interface implemented in most commercial off-the-shelf FPGA systems-on-chip (SoC) to exchange data within the chip. Unfortunately, the AXI standard does not mandate any mechanism to detect possible misbehavior of the connected modules. This work shows that this lack of specification has a relevant impact on popular implementations of the AXI bus. In particular, it is shown how arbitrarily long delays can easily be injected on modern FPGA systems-on-chip in the presence of misbehaving bus masters. To safely solve this issue, this paper presents a general timing analysis to bound the execution of periodically-invoked hardware accelerators in nominal conditions. This timing analysis is then used to configure a latency-free hardware module named AXI Stall Monitor (ASM), also proposed in this paper, capable of detecting and safely resolving possible stalls during AXI bus transactions. The ASM leaves a quantified flexibility to the hardware accelerators when deviating from nominal conditions. The contribution is finally supported by a set of experiments on the Zynq-7000 and Zynq Ultrascale+ SoCs by Xilinx.
Design and implementation of Performance Analysis Unit (PAU) for AXI-based multi-core System on Chip (SOC) With the rapid development of semiconductor technology, more complicated systems have been integrated into single chips. However, system performance does not increase in proportion to the gate count of the system. This is mainly because the optimized design of the system becomes more difficult as systems become more complicated. Therefore, it is essential to understand the internal behavior of the system and utilize the system resources effectively in System on Chip (SOC) design. In this paper, we design a Performance Analysis Unit (PAU) for monitoring the AMBA Advanced eXtensible Interface (AXI) bus as a mechanism to investigate the internal and dynamic behavior of an SOC, especially for internal bus activities. A case study with the PAU for an H.264 decoder application is also presented to show how the PAU is utilized in an SOC platform. The PAU has the capability to measure major system performance metrics, such as bus latency, amount of bus traffic, contention between master/slave devices, and bus utilization for specific durations. This paper also presents a distributor and synchronization method to connect multiple PAUs to monitor multiple internal buses of a large SOC.
Aker: A Design and Verification Framework for Safe and Secure SoC Access Control Modern systems on a chip (SoCs) utilize heterogeneous architectures where multiple IP cores have concurrent access to on-chip shared resources. In security-critical applications, IP cores have different privilege levels for accessing shared resources, which must be regulated by an access control system. Aker is a design and verification framework for SoC access control. Aker builds upon the Access...
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with a maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
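The randomized rotation at the heart of LEACH reduces to a per-round election threshold. The sketch below implements that rule, with the eligibility bookkeeping (the set of nodes that have not served recently) simplified to a boolean flag:

```python
import random

def elects_itself_cluster_head(P: float, r: int, served_recently: bool) -> bool:
    """LEACH election: in round r, a node that has not been cluster-head in
    the last 1/P rounds self-elects with probability
    T = P / (1 - P * (r mod 1/P)), which reaches 1 by the end of each epoch
    so that, on average, every node serves once per 1/P rounds."""
    if served_recently:                      # outside the eligible set G
        return False
    threshold = P / (1.0 - P * (r % round(1 / P)))
    return random.random() < threshold

# e.g. with P = 0.05, roughly 5% of eligible nodes become heads each round
```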
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
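The separation step is compact enough to state exactly: a baseband-equivalent signal of amplitude a and phase φ splits into two components of fixed amplitude a_max/2 and phases φ ± arccos(a/a_max). The numpy sketch below verifies that the passive sum reconstructs the original; the function name and normalization are our own choices.

```python
import numpy as np

def linc_separate(s: np.ndarray, a_max: float):
    """Split a baseband-equivalent signal into two constant-envelope
    components: all amplitude information becomes the phase offset theta,
    and passive combining (s1 + s2) restores the original signal."""
    a, phi = np.abs(s), np.angle(s)
    theta = np.arccos(np.clip(a / a_max, 0.0, 1.0))
    s1 = (a_max / 2) * np.exp(1j * (phi + theta))
    s2 = (a_max / 2) * np.exp(1j * (phi - theta))
    return s1, s2

# Sanity check: the recombined components equal the original signal.
s = 0.3 * np.exp(1j * 0.7) * np.ones(4)
s1, s2 = linc_separate(s, a_max=1.0)
assert np.allclose(s1 + s2, s) and np.allclose(np.abs(s1), 0.5)
```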
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multi-input-multi-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
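As a reference for one kernel such an accelerator runs, a linear MMSE MIMO detection step is shown below in floating point; the paper's contribution is the fixed-point, deeply pipelined realization, which this sketch does not attempt to model.

```python
import numpy as np

def mmse_detect(H: np.ndarray, y: np.ndarray, noise_var: float) -> np.ndarray:
    """Linear MMSE detection: x_hat = (H^H H + sigma^2 I)^{-1} H^H y,
    for a received vector y = H x + n with channel matrix H.
    Floating-point reference only; the accelerator evaluates such kernels
    with pipelined fixed-point complex arithmetic."""
    n_tx = H.shape[1]
    G = H.conj().T @ H + noise_var * np.eye(n_tx)
    return np.linalg.solve(G, H.conj().T @ y)
```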
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals are continuously monitored.
1.2
0.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
A Commutated-LC RF Broadband Delay Circuit This article presents a commutated-inductor-capacitor (commutated-LC) or switched-LC circuit that acts as a radio frequency (RF) delay line. Thanks to its linear-periodically time-varying (LPTV) operation and fully passive implementation, it concurrently achieves long maximum delays, fine delay tuning steps, and wide instantaneous bandwidths while being low loss and highly linear. Unlike existing LPTV switched-capacitor broadband delays, the introduction of inductors in the proposed commutated-LC delay circuit provides a new degree of freedom, allowing it to operate at a much higher RF with a wider instantaneous bandwidth. A proof-of-concept prototype in a 65-nm CMOS process demonstrates a measured 1.3-GHz 3-dB bandwidth around a 4.3-GHz RF, i.e., a 30% fractional bandwidth, when clocked at 250 MHz. The measured maximum delay is 1.4 ns with a 23-dB loss and noise figure; this loss or noise is orders of magnitude lower compared with fully passive linear-time-invariant RF delay lines operating at a similar frequency with the same delay. The measured IIP3 is +16 dBm.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
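Chord's single operation, mapping a key to a node, amounts to "find the first node clockwise of the key on a hash ring". The sketch below resolves it by linear scan for clarity; real Chord reaches the same node in O(log n) hops using finger tables, which are not modeled here.

```python
import hashlib

def chord_id(name: str, m: int = 16) -> int:
    """Hash a name onto the 2^m identifier ring (truncated SHA-1)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << m)

def successor(key_id: int, node_ids: list[int]) -> int:
    """The node responsible for a key: the first node id clockwise from
    the key's id, wrapping around the ring. Linear scan for clarity only;
    Chord's finger tables resolve this in O(log n) messages."""
    ring = sorted(node_ids)
    for nid in ring:
        if nid >= key_id:
            return nid
    return ring[0]                         # wrap past the top of the ring

# e.g. successor(chord_id("my-file"), [chord_id(f"node{i}") for i in range(8)])
```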
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
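The lasso, one of the applications the review surveys, makes the method's three-step pattern concrete: a ridge-like x-update, a shrinkage z-update, and a dual ascent. This is a minimal dense-matrix sketch under our own default parameters (rho, iteration count), not a tuned solver.

```python
import numpy as np

def soft_threshold(v: np.ndarray, k: float) -> np.ndarray:
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z.
    x-update: ridge-like solve; z-update: soft-thresholding;
    u-update: dual ascent on the scaled multiplier."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))     # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z
```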
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) and a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
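A generic line-of-sight channel-gain formula gives the flavor of such link-budget models; note that the paper uses a measured, market-weighted headlamp beam pattern rather than the idealized Lambertian source assumed here, and optical filter and concentrator gains are omitted.

```python
import math

def los_gain(d: float, phi: float, psi: float, area: float, m: float = 1.0) -> float:
    """Idealized Lambertian LOS VLC channel gain H (received power = H * Pt).
    d: distance in m; phi: irradiance angle and psi: incidence angle, in rad
    (both assumed within the field of view); area: photodetector area in m^2;
    m: Lambertian order of the source."""
    return (m + 1) * area / (2 * math.pi * d * d) * math.cos(phi) ** m * math.cos(psi)

# e.g. los_gain(20.0, 0.1, 0.1, area=1e-4): gain at 20 m, near-axial geometry
```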
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">in-vitro</i> with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\mathbf {\mu V_{rms}}$</tex-math></inline-formula> for a spot noise of about 85 nV <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\mathbf {/\sqrt{Hz}}$</tex-math></inline-formula> . The system draws 1.5 <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><tex-math notation="LaTeX">$\boldsymbol{\mu}$</tex-math></inline-formula> W per channel from 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
Bandwidth Limitation for the Constant Envelope Components of an OFDM Signal in a LINC Architecture The linear amplification using non-linear components (LINC) power amplifier architecture uses saturated amplifiers to achieve high efficiency and linearity. However, the non-linear operations for LINC separation lead to bandwidth expansion. The expanded bandwidth causes problems for the analog circuitry and effectively places a limit on the modulation bandwidth of the transmitted signal. We show that for multi-carrier signals, it is the phase component of the transmitted modulation that dominates the spectrum of the LINC components. A novel bandwidth reduction scheme is then proposed for these components. The scheme repetitively limits the bandwidth of the LINC components while inhibiting envelope variations. Measurements show a 46% bandwidth reduction while still meeting the WLAN spectrum mask. This implies an increase of 85% in the modulation bandwidth of the transmitted signal.
SFDR-bandwidth limitations for high speed high resolution current steering CMOS D/A converters Although very high update rates are achieved in recent publications on high resolution D/A converters, the bottleneck in the design is to achieve a high spurious-free output signal bandwidth. The influence of the dynamic output impedance on the chip performance has been analyzed and identified as an important limitation for the spurious-free dynamic range (SFDR) of high resolution DACs. Based on the presented analysis, an optimized topology is proposed.
Analysis of a 5.5-V Class-D Stage Used in 30-dBm Outphasing RF PAs in 130- and 65-nm CMOS This brief presents the design and analysis of a 5.5-V class-D stage used in two fully integrated watt-level, +32.0 and +29.7 dBm, outphasing RF power amplifiers (PAs) in standard 130- and 65-nm CMOS technologies. The class-D stage utilizes a cascode configuration, driven by an ac-coupled low-voltage driver, to allow a 5.5-V supply in the 1.2-/2.5-V technologies without excessive device voltage stress. The rms electric fields (E) across the gate oxides and the optimal bias point, where the voltage stress is equally divided between the transistors, are computed. At the optimal bias point, the rms E, the power dissipation of the parasitic drain capacitance of the common-source transistors, and the equivalent on-resistances are reduced by approximately 25%, 50%, and 25%, compared to a conventional cascode (inverter) stage. To the authors' best knowledge, the class-D PAs presented are among the first fully integrated CMOS outphasing PAs reaching +30 dBm and demonstrate state-of-the-art output power and bandwidth.
Digital Interpolating Phase Modulator Implementation for Outphasing PA Recent wireless architectures have increased requirements in terms of power consumption and linearity. These requirements especially concern the power amplifier (PA) responsible for driving the antenna. One architecture proposed for this role is the outphasing amplifier, which realizes linear amplification using non-linear components. To generate the waveforms required by the switched-mode outphasing architecture, the Digital Interpolating Phase Modulator (DIPM) was proposed recently. It works by reconstructing a constant-amplitude RF signal using several DSP engines. However, the original authors describe only an implementation that assumes four signal processing engines. In this paper, an implementation of the DIPM algorithm is developed, alongside an exploration of the number of signal processing engines used. Furthermore, the digital design is synthesized in two different process technologies to contrast power and area utilization. Our simulations confirm their choice of the number of engines if power is a constraint. However, other requirements, such as the technology used, may lead to different solutions.
Digital Signal Processing in Radio Receivers and Transmitters The interface between analog and digital signal processing paths in radio receivers and transmitters is steadily migrating toward the antenna as engineers learn to combine the unique attributes and capabilities of DSP with those of traditional communication system designs to achieve systems with superior and broadened capabilities while reducing system cost. Digital signal processing (DSP) techniques are rapidly being applied to many signal conditioning and signal processing tasks traditionally performed by analog components and subsystems in RF communication receivers and transmitters [1-4]. The incentive to replace analog implementations of signal processing functions with DSP-based processing includes reduced cost, enhanced performance, improved reliability, ease of manufacturing and maintenance, and operating flexibility and configurability [5]. Technologies that facilitate cost-effective DSP-based implementations include a very large market base supporting high-performance programmable signal processing chips [6], field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), and high-performance analog-to-digital and digital-to-analog converters (ADC and DAC respectively) [7]. The optimum point for inserting DSP in a signal processing chain is determined by matching the system performance requirements to bandwidth and signal-to-noise ratio (i.e., speed and precision) limitations of the signal processors and the signal converters. In this paper we review how clever algorithmic structures interact with DSP hardware to extend the range and performance of DSP-based processing in RF transmitters and receivers.
Linearized Dual-Band Power Amplifiers With Integrated Baluns in 65 nm CMOS for a 2 × 2 802.11n MIMO WLAN SoC Fully integrated dual-band power amplifiers with on-chip baluns for 802.11n MIMO WLAN applications are presented. With a 3.3 V supply, the PAs produce a saturated output power of 28.3 dBm and 26.7 dBm with peak drain efficiency of 35.3% and 25.3% for the 2.4 GHz and 5 GHz bands, respectively. By utilizing multiple fully self-contained linearization algorithms, an EVM of -25 dB is achieved at 22.4 dBm for the 2.4 GHz band and 20.5 dBm for the 5 GHz band while transmitting 54 Mb/s OFDM. The chip is fabricated in standard 65 nm CMOS, and the PAs occupy 0.31 mm² (2.4 GHz) and 0.27 mm² (5 GHz) of area. To examine the reliability of the PAs, accelerated aging tests were performed on several hundred parts without a single failure.
A capacitance-compensation technique for improved linearity in CMOS class-AB power amplifiers A nonlinear capacitance-compensation technique is developed to help improve the linearity of CMOS class-AB power amplifiers. The method involves placing a PMOS device alongside the NMOS device that works as the amplifying unit, such that the overall capacitance seen at the amplifier input is a constant, thus improving linearity. The technique is developed with the help of computer simulations and Volterra analysis. A prototype two-stage amplifier employing the scheme is fabricated using a 0.5-μm CMOS process, and the measurements show that an improvement of approximately 8 dB in both two-tone intermodulation distortion (IM3) and adjacent-channel leakage power (ACP1) is obtained for a wide range of output power. The linearized amplifier exhibits an ACP1 of -35 dBc at the designed output power of 24 dBm, with a power-added efficiency of 29% and a gain of 23.9 dB, demonstrating the potential utility of the design approach for 3GPP WCDMA applications.
Study of Subharmonically Injection-Locked PLLs A complete analysis on subharmonically injection-locked PLLs develops fundamental theory for subharmonic locking phenomenon. It explains the noise shaping phenomenon, locking range and behavior, PVT tolerance, and pseudo locking issue. All of the analyses are verified by real chip measurements. Two 20-GHz PLLs based on the proposed theory are designed and fabricated in 90-nm CMOS technology to dem...
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
A Double-Tail Latch-Type Voltage Sense Amplifier with 18ps Setup+Hold Time.
Decision making for cognitive radio equipment: analysis of the first 10 years of exploration. This article takes a general retrospective view of the first 10 years of cognitive radio (CR). More specifically, we explore decision making and learning for CR from an equipment perspective. Thus, this article depicts the main decision making problems addressed by the community as general dynamic configuration adaptation (DCA) problems and discusses the solutions suggested in the literature to tackle them. Within this framework, dynamic spectrum management is briefly introduced as a specific instantiation of DCA problems. We identify, in our analysis, three dimensions of constraints: the environment's, the equipment's, and the user's related constraints. Moreover, we define and use the notion of a priori knowledge to show that the challenges tackled by the radio community during the first 10 years of CR to solve decision making problems often share the same design space, but differ in the a priori knowledge they assume available. Consequently, we suggest the "a priori knowledge" as a classification criterion to discriminate among the main techniques proposed in the literature to solve configuration adaptation decision making problems. We finally discuss the impact of sensing errors on the decision making process as a prospective analysis.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing Halanay's inequality to time-varying differential inequalities.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of a BMI, are capable of capturing brain signals and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges in designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog-to-digital converters (ADCs), and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters; then, in most cases, data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
1.022042
0.022424
0.022424
0.022424
0.018182
0.009091
0.004021
0.00062
0
0
0
0
0
0
A New Buck Converter With Optimum-Damping and Dynamic-Slope Compensation Techniques. This paper presents a new buck converter with optimum-damping and dynamic-slope compensation techniques. The optimum-damping control is a well-known current control method, and the proposed dynamic-slope generator can achieve frequency compensation to prevent subharmonic oscillation. When the system is disturbed, the proposed dynamic-slope generator can stabilize the circuit in a single cycle. Hen...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant-Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant-Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load-current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
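For context on the "two zeros" remark, the textbook Type III compensator response that a PT3 scheme would synthesize has the general form below. This is a standard expression, not taken from the paper, and the pole/zero symbols are generic:

```latex
G_c(s) = \frac{\omega_I}{s}\,
         \frac{\bigl(1 + s/\omega_{z1}\bigr)\bigl(1 + s/\omega_{z2}\bigr)}
              {\bigl(1 + s/\omega_{p1}\bigr)\bigl(1 + s/\omega_{p2}\bigr)}
```

The two zeros \(\omega_{z1}, \omega_{z2}\) are conventionally placed near the output filter's LC double pole to recover phase margin, which is the role the paper's high-gain low-frequency path plus moderate-gain high-frequency path plays with smaller passives.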
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) in the car at a height of 0.2-0.4 m above the road surface, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Design and test-bed implementation of a reconfigurable receiver for navigation applications The geographical location of a wireless device is acquiring significant importance under the boost of innovative applications in the wireless market, and under the research activities related to the development of a new generation of Global Navigation Satellite Systems (GNSS). In the US a new phase of the Global Positioning System is foreseen, and Europe has decided to proceed with the Galileo program. In addition, different government institutions worldwide are working toward the definition of navigation plans, and regulations will make the location identification of mobile users in emergency situations mandatory (the US E-911 law and the European E-112 directive). Considering all the navigation/positioning systems and techniques that will be available in the near future, it is clear that a central role will be played by the user handset; in fact, if navigation signals from different sources are available, the only way to obtain the best navigation performance from the user's perspective will be to employ enhanced receivers able to fuse the different data. With this aim, this paper presents the design of an innovative navigation receiver based on Software Defined Radio (SDR) technology. The reconfigurability and flexibility issues are guaranteed by the use of reprogrammable high-speed hardware (FPGA, Field Programmable Gate Array, and DSP, Digital Signal Processor), and thus a deep integration at the raw signal level between GPS and the future Galileo has been obtained. The interoperability issue among different systems is then solved at the receiver level. This paper discusses the implementation of a baseline architecture of a reconfigurable GNSS receiver on an FPGA + DSP prototype platform. The receiver is able to deal with different input Signals-In-Space (SIS), and can operate with present GPS ranging signals or with the future Galileo SIS. Copyright (C) 2002 John Wiley & Sons, Ltd.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
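The single operation Chord exports, mapping a key to a node, is consistent hashing on an identifier ring, which a toy sketch captures. Real Chord resolves the successor in O(log N) hops via finger tables on separate machines; the helper names below are our own.

```python
# Toy consistent-hashing lookup on a 2^M identifier ring.
import hashlib
from bisect import bisect_right

M = 16  # identifier bits

def ident(name: str) -> int:
    """Hash a node name or key onto the ring [0, 2^M)."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << M)

def successor(node_ids, key_id):
    """First node clockwise from key_id on the ring."""
    i = bisect_right(node_ids, key_id)
    return node_ids[i % len(node_ids)]  # wrap around past the highest id

nodes = sorted(ident(f"node-{i}") for i in range(8))
key = ident("some-data-item")
print(f"key {key} -> node {successor(nodes, key)}")
```

Because only the clockwise successor relation matters, a node joining or leaving moves responsibility for just the keys between it and its neighbor, which is what makes the scheme robust under churn.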
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
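One common way to make query evaluation in the presence of nulls concrete is a third truth value for comparisons involving a null, with selection keeping only tuples where the predicate is definitely true. The sketch below is a generic illustration of that idea, not the paper's exact semantics; the schema and data are invented.

```python
# Selection over tuples with nulls using a three-valued predicate.
TRUE, FALSE, UNKNOWN = "T", "F", "U"

def eq(a, b):
    if a is None or b is None:
        return UNKNOWN          # comparing with a null is neither true nor false
    return TRUE if a == b else FALSE

def select(rows, attr, value):
    return [r for r in rows if eq(r.get(attr), value) == TRUE]

employees = [
    {"name": "ann", "dept": "sales"},
    {"name": "bob", "dept": None},      # dept unknown, not "no department"
    {"name": "cay", "dept": "sales"},
]
print(select(employees, "dept", "sales"))  # bob is excluded from the answer
```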
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant-Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant-Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load-current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Proposal and evaluation of system diversity for software defined radio This paper proposes the concept of system diversity for software defined radio (SDR) and investigates the effectiveness of system diversity by using a concrete simulation model. System diversity allows the wireless communication system to be dynamically changed, in addition to the signal processing algorithm or modulation/coding scheme being used. To clarify the validity of system diversity, we examine a system simulation model consisting of three wireless communication systems; algorithms are introduced to show how system diversity can be controlled using the QoS parameters of the received signal level, the data transmission rate, and the channel capacity. The process by which system diversity switching is triggered is elucidated and a practical example is introduced. Simulation results confirm that system diversity offers higher performance in terms of data throughput and system channel capacity than existing wireless communication systems.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant-Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant-Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load-current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 240-MHz Low-Pass Filter With Variable Gain in 65-nm CMOS for a UWB Radio Receiver An integrated fifth-order continuous-time low-pass filter for a WiMedia ultrawideband radio receiver is described in this paper. The prototype filter is realized with a passive pole at the filter input and a fourth-order leapfrog filter in which the gm-C technique with pseudodifferential transconductors is used. The transconductors do not include internal nodes, and they are designed to have a nominal 26-dB dc gain, of which process, voltage, and temperature variations are controlled by means of a negative resistance circuit. The losses of the low-dc-gain filter integrators are already taken into account in the filter synthesis. The passband edge frequency of the implemented filter is 240 MHz in order to receive multiband-orthogonal-frequency-division-multiplexing signals using the direct-conversion topology. The voltage gain of the filter can be controlled from 9 to 43 dB in 1-dB gain steps. The filter achieves a 7.8-nV/√Hz input-referred noise density, a -8-dBV out-of-band third-order intermodulation intercept point, and a +15-dBV out-of-band second-order intermodulation intercept point. The circuit uses a 1.2-V supply and has been fabricated in a modern 65-nm CMOS technology.
A 65nm CMOS 1-to-10GHz tunable continuous-time low-pass filter for high-data-rate communications In this work the Gm-C topology is adopted for its merits at high frequencies. In this technique, two critical parameters should be accounted for: the accuracy of the Q factors of the pole pairs (for correct transfer function) and parasitic capacitances (for maximal cut-off frequency). The former point is influenced by the phase shift of the integrators compounding the filter in the neighborhood of the filter's edge frequency. This phase error is due to two antagonistic effects which are the integrator's finite DC gain and its high-frequency poles/zeros.
A 72 dB-DR 465 MHz-BW Continuous-Time 1-2 MASH ADC in 28 nm CMOS. This paper presents a continuous-time (CT) multi-stage noise-shaping (MASH) ADC in 28 nm CMOS. The MASH ADC uses a first-order front-end stage to digitize the input signal and a second-order back-end stage to digitize the quantization noise of the coarse flash ADC inside the front-end. An RC lattice filter and a current-steering DAC are utilized to extract the front-end coarse quantization residue...
Design and Analysis of Enhanced Mixer-First Receivers Achieving 40-dB/decade RF Selectivity A “second-order” passive mixer-first receiver is proposed to improve channel selectivity, linearity, and noise figure (NF) in the presence of out-of-band blockers, by presenting an impedance that rolls off at 40 dB/decade as the load to an N-path filter. The synthesis of this impedance is described in a step-by-step manner starting from the required impedance transfer function to its actual circui...
A 65 nm CMOS Quad-Band SAW-Less Receiver SoC for GSM/GPRS/EDGE. A quad-band 2.5G receiver is designed to replace the front-end SAW filters with on-chip bandpass filters and to integrate the LNA matching components, as well as the RF baluns. The receiver achieves a typical sensitivity of -110 dBm or better, while saving a considerable amount of BOM. Utilizing an arrangement of four baseband capacitors and MOS switches driven by 4-phase 25% duty-cycle clocks, high-Q BPFs are realized to attenuate the 0 dBm out-of-band blocker. The 65 nm CMOS SAW-less receiver, integrated as part of a 2.5G SoC, draws 55 mA from the battery, and measures an out-of-band 1-dB compression point of greater than +2 dBm. Measured stand-alone, as well as with the baseband running in call mode at the platform level, the receiver passes the 3GPP specifications with margin.
Analysis and Optimization of Current-Driven Passive Mixers in Narrowband Direct-Conversion Receivers Properties of the current-driven passive mixer are explored to maximize its performance in a zero-IF receiver. Since there is no reverse isolation between the RF and baseband sides of the mixer, the mixer reflects the baseband impedance to the RF and vice versa through simple frequency shifting. It is also shown that in an IQ down-conversion system the lack of reverse isolation causes a mutual interaction between the two quadrature mixers, which results in different high-and low-side conversion gains, and unexpected IIP2 and IIP3 values. With a thorough and accurate mathematical analysis it is shown how to design this mixer and its current buffer, and how to size components to get the best linearity, conversion gain and noise figure while alleviating the IQ cross-talk problem.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
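The identity-diffusion step can be sketched as a synchronous "floodmax" pass: for d rounds every node adopts the largest node id heard so far, so each node learns the maximum id within d hops. This is only the first half of the Max-Min heuristic (the full version follows with d "floodmin" rounds plus tie-breaking rules); the topology below is invented.

```python
# Floodmax sketch: d rounds of taking the max id over self and neighbors.
def floodmax(adj, d):
    value = {v: v for v in adj}            # every node starts from its own id
    for _ in range(d):
        value = {v: max([value[v]] + [value[u] for u in adj[v]]) for v in adj}
    return value

# Invented 6-node topology: a 0-1-2-3-4-5 chain with a 1-4 shortcut.
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3, 5], 5: [4]}
winners = floodmax(adj, d=2)
print(winners)                             # each node's tentative head id
print({v for v in adj if winners[v] == v})  # ids that survived as heads
```

Running only floodmax over-concentrates clusterheads on large ids, which is precisely the bias the floodmin phase and the re-election rules of the full heuristic are designed to correct.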
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of experimental setups and conditions used, together with the inherent uncertainty caused by the human body, makes the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipment and conditions on the IBC channel. Methods: Different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device, and the effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: The correspondence between simulated and experimental results obtained with the electric phantom has allowed us to discriminate the effects caused by the measurement equipment. Conclusion: This study has helped us obtain useful considerations about optimal setups for galvanic-type IBC, as well as to identify some of the main causes of discrepancy in the literature.
On the minimal synchronism needed for distributed consensus Reaching agreement is a primitive of distributed computing. While this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: a system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. Fischer, Lynch and Paterson [FLP] have shown that in a completely asynchronous model, even one failure cannot be tolerated. In this paper we extend their work, identifying several critical system parameters, including various synchronicity conditions, and examine how varying these affects the number of faults which can be tolerated. Our proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked
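The survey's three-part framing (assumed plant structure, required information, implementation equation) is easy to see on the classical Luenberger observer. A minimal discrete-time sketch follows; the matrices and the gain L are invented for illustration and are not tuned by any method from the survey.

```python
# Luenberger observer sketch: plant x' = A x + B u, y = C x;
# observer implementation equation: xhat' = A xhat + B u + L (y - C xhat).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])               # only position is measured
L = np.array([[0.5], [1.0]])             # observer gain (assumed, not optimal)

x = np.array([[1.0], [0.0]])             # true state, unknown to the observer
xhat = np.zeros((2, 1))                  # observer starts from zero
for k in range(50):
    u = np.array([[0.1]])                # known input signal
    y = C @ x                            # measurement from the plant
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat)   # correction by output error
    x = A @ x + B @ u                    # plant evolves in parallel
print("true:", x.ravel(), "estimate:", xhat.ravel())
```

The estimation error obeys e' = (A - LC) e, so the whole design question reduces to choosing L to make A - LC stable; the disturbance-observer variants the survey covers augment this same template with an estimate of the unmodeled input.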
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multiple-input multiple-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general-purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm 2 and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been shown to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.11
0.1
0.1
0.1
0.025
0.000333
0
0
0
0
0
0
0
0
A spiking neuromorphic design with resistive crossbar Neuromorphic systems recently gained increasing attention for their high computation efficiency. Many designs have been proposed and realized with traditional CMOS technology or emerging devices. In this work, we propose a spiking neuromorphic design built on resistive crossbar structures and implemented in IBM 130nm technology. Our design adopts a rate-coding scheme where pre- and post-neuron signals are represented by digitized pulses. The weighting function of pre-neuron signals is executed on the resistive crossbar in analog format. The computing result is transferred into digitized output spikes via an integrate-and-fire circuit (IFC) as the post-neuron. We calibrated the computation accuracy of the entire system through circuit simulations. The results demonstrated a good match to our analytic modeling. Furthermore, we implemented both feedforward and Hopfield networks by utilizing the proposed neuromorphic design. The system performance and robustness were studied through massive Monte-Carlo simulations based on the application of digital image recognition. Compared to a previous crossbar-based computing engine that represents data with voltage amplitude, our design can achieve >50% energy savings, while the average probability of failed recognition increases by only 1.46% and 5.99% in the feedforward and Hopfield implementations, respectively.
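A behavioral sketch of the rate-coded pipeline described above: input spike counts drive a resistive crossbar (an analog matrix-vector multiply), and an integrate-and-fire stage converts each accumulated current back into output spikes. The sizes, conductances, and threshold below are all invented; real designs also model device variation, which is what the paper's Monte-Carlo study probes.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))     # crossbar conductances (weights)
in_rates = rng.integers(1, 10, size=8)     # pre-neuron spike counts per window

threshold, v = 5.0, np.zeros(4)
out_spikes = np.zeros(4, dtype=int)
steps = int(in_rates.max())                # snapshot before decrementing
for _ in range(steps):
    active = (in_rates > 0).astype(float)  # inputs that still emit a pulse
    in_rates = np.maximum(in_rates - 1, 0)
    v += G @ active                        # analog weighting on the crossbar
    fired = v >= threshold                 # integrate-and-fire post-neurons
    out_spikes += fired
    v[fired] = 0.0                         # reset membrane after each spike
print(out_spikes)                          # rate-coded digital output
```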
HCP: A Flexible CNN Framework for Multi-label Image Classification. Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), whe...
On-Chip Memory Technology Design Space Explorations for Mobile Deep Neural Network Accelerators Deep neural network (DNN) inference tasks have become ubiquitous workloads on mobile SoCs and demand energy-efficient hardware accelerators. Mobile DNN accelerators are heavily area-constrained, with only minimal on-chip SRAM, which results in heavy use of inefficient off-chip DRAM. With diminishing returns from conventional silicon technology scaling, emerging memory technologies that offer better area density than SRAM can boost accelerator efficiency by minimizing costly off-chip DRAM accesses. This paper presents a detailed design space exploration (DSE) of technology-system co-design for systolic-array accelerators. We focus on practical/mature on-chip memory technologies, including SRAM, eDRAM, MRAM, and 3D vertical RRAM (VRRAM). The DSE employs state-of-the-art optimizations (e.g., model compression and optimized buffer scheduling), and evaluates results on important models including ResNet-50, MobileNet, and Faster-RCNN. Compared to an SRAM/DRAM baseline, MRAM-based accelerators show up to 4.68× energy benefits (57% area overhead), while a 3D VRRAM-based design achieves 2.22× energy benefits (33% area reduction).
A 7-nm Compute-in-Memory SRAM Macro Supporting Multi-Bit Input, Weight and Output and Achieving 351 TOPS/W and 372.4 GOPS In this work, we present a compute-in-memory (CIM) macro built around a standard two-port compiler macro using foundry 8T bit-cell in 7-nm FinFET technology. The proposed design supports 1024 4 b × 4 b multiply-and-accumulate (MAC) computations simultaneously. The 4-bit input is represented by the number of read word-line (RWL) pulses, while the 4-bit weight is realized by charge sharing among binary-weighted computation caps. Each unit of computation cap is formed by the inherent cap of the sense amplifier (SA) inside the 4-bit Flash ADC, which saves area and minimizes kick-back effect. Access time is 5.5 ns with 0.8-V power supply at room temperature. The proposed design achieves energy efficiency of 351 TOPS/W and throughput of 372.4 GOPS. Implications of our design from neural network implementation and accuracy perspectives are also discussed.
RRAM for Compute-in-Memory: From Inference to Training To efficiently deploy machine learning applications to the edge, compute-in-memory (CIM) based hardware accelerators are a promising solution with improved throughput and energy efficiency. Instant-on inference is further enabled by emerging non-volatile memory technologies such as resistive random access memory (RRAM). This paper reviews the recent progress of RRAM-based CIM accelerator desig...
Challenges and Trends of SRAM-Based Computing-In-Memory for AI Edge Devices When applied to artificial intelligence edge devices, the conventional von Neumann computing architecture imposes numerous challenges (e.g., improving the energy efficiency), due to the memory-wall bottleneck involving the frequent movement of data between the memory and the processing elements (PE). Computing-in-memory (CIM) is a promising candidate approach to breaking through this so-called m...
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
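A toy rendition of the MRU-change idea: simulate a full LRU stack (rather than the paper's set-associative regions) and count accesses that miss the most recently used position, approximating working-set changes; the trace is illustrative:

```python
# Count "MRU changes": accesses to a line that is not currently most recently
# used. These events approximate moments when the CPU shifts its working set.
def mru_changes(trace):
    stack, changes = [], 0
    for addr in trace:
        if stack and stack[0] == addr:    # hit in the MRU position
            continue
        changes += 1                      # CPU touched a non-MRU line
        if addr in stack:
            stack.remove(addr)
        stack.insert(0, addr)             # promote to MRU
    return changes

print(mru_changes(list("aaabbac")))       # -> 4
```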
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
On the minimal synchronism needed for distributed consensus Reaching agreement is a primitive of distributed computing. While this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: a system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. Fischer, Lynch and Paterson [FLP] have shown that in a completely asynchronous model, even one failure cannot be tolerated. In this paper we extend their work, identifying several critical system parameters, including various synchronicity conditions, and examine how varying these affects the number of faults which can be tolerated. Our proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others.
Type-2 fuzzy sets and systems: an overview This paper provides an introduction to and an overview of type-2 fuzzy sets (T2 FS) and systems. It does this by answering the following questions: What is a T2 FS and how is it different from a T1 FS? Is there new terminology for a T2 FS? Are there important representations of a T2 FS and, if so, why are they important? How and why are T2 FSs used in a rule-based system? What are the detailed computations for an interval T2 fuzzy logic system (IT2 FLS) and are they easy to understand? Is it possible to have an IT2 FLS without type reduction? How do we wrap this up and where can we go to learn more?
RockSalt: better, faster, stronger SFI for the x86 Software-based fault isolation (SFI), as used in Google's Native Client (NaCl), relies upon a conceptually simple machine-code analysis to enforce a security policy. But for complicated architectures such as the x86, it is all too easy to get the details of the analysis wrong. We have built a new checker that is smaller, faster, and has a much reduced trusted computing base when compared to Google's original analysis. The key to our approach is automatically generating the bulk of the analysis from a declarative description which we relate to a formal model of a subset of the x86 instruction set architecture. The x86 model, developed in Coq, is of independent interest and should be usable for a wide range of machine-level verification tasks.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback from the output voltage level. Therefore, a fast load-transient response can be achieved. In addition, the output regulation performance is improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-RESR design configuration. The prototype fabricated in a TSMC 0.25μm CMOS process occupies 1.78mm2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30mV over a wide load-current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5μs.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, a SAR ADC in combination with digital compression is typically used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such a level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example, 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in less data to transmit in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
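An illustrative level-crossing sampler (the paper's power model itself is beyond a sketch): output events are emitted only when the input crosses a quantizer level, so slowly varying biosignals generate few samples; the test signal and level spacing are illustrative assumptions:

```python
# Level-crossing sampling: compare each sample's quantizer bucket with the
# previous one and emit an event only on a change.
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)

delta = 0.25                              # level spacing of the quantizer
level = np.floor(x[0] / delta)
events = []
for ti, v in zip(t, x):
    q = np.floor(v / delta)
    if q != level:                        # one or more levels crossed
        level = q
        events.append((ti, q * delta))

print(f"{len(events)} level-crossing events vs. {len(x)} uniform samples")
```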
1.2
0.2
0.2
0.1
0.066667
0.018182
0
0
0
0
0
0
0
0
Helix: Algorithm/Architecture Co-design for Accelerating Nanopore Genome Base-calling Nanopore genome sequencing is the key to enabling personalized medicine, global food security, and virus surveillance. The state-of-the-art base-callers adopt deep neural networks (DNNs) to translate electrical signals generated by nanopore sequencers to digital DNA symbols. A DNN-based base-caller consumes 44.5% of total execution time of a nanopore sequencing pipeline. However, it is difficult to quantize a base-caller and build a power-efficient processing-in-memory (PIM) to run the quantized base-caller. Although conventional network quantization techniques reduce the computing overhead of a base-caller by replacing floating-point multiply-accumulations with cheaper fixed-point operations, they significantly increase the number of systematic errors that cannot be corrected by read votes. The power density of prior nonvolatile memory (NVM)-based PIMs has already exceeded memory thermal tolerance even with active heat sinks, because their power efficiency is severely limited by analog-to-digital converters (ADCs). Finally, Connectionist Temporal Classification (CTC) decoding and read voting cost 53.7% of total execution time in a quantized base-caller, and thus become its new bottleneck. In this paper, we propose a novel algorithm/architecture co-designed PIM, Helix, to power-efficiently and accurately accelerate nanopore base-calling. From the algorithm perspective, we present systematic-error-aware training to minimize the number of systematic errors in a quantized base-caller. From the architecture perspective, we propose a low-power SOT-MRAM-based ADC array to process analog-to-digital conversion operations and improve the power efficiency of prior DNN PIMs. Moreover, we revise a traditional NVM-based dot-product engine to accelerate CTC decoding operations, and create a SOT-MRAM binary comparator array to process read voting. Compared to state-of-the-art PIMs, Helix improves base-calling throughput by 6x, throughput per Watt by 11.9x, and throughput per mm² by 7.5x without degrading base-calling accuracy.
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
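A compact version of the winnowing scheme sketched above: hash all k-grams, slide a window of w hashes, and keep the rightmost minimum of each window; the k, w, and test strings are illustrative, and Python's built-in hash stands in for a proper rolling hash:

```python
# Winnowing: from each window of w consecutive k-gram hashes, record the
# minimum (rightmost on ties) together with its position as a fingerprint.
def winnow(text, k=5, w=4):
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(max(0, len(hashes) - w + 1)):
        window = hashes[i:i + w]
        j = min(range(len(window)), key=lambda n: (window[n], -n))  # rightmost min
        fingerprints.add((i + j, window[j]))   # (position, fingerprint) pairs
    return fingerprints

a = winnow("a do run run run, a do run run")
b = winnow("please, a do run run run, a do")
print(len(a & b), "shared fingerprints")       # overlap indicates copying
```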
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
Accelerating Sequence Alignment to Graphs Aligning DNA sequences to an annotated reference is a key step for genotyping in biology. Recent scientific studies have demonstrated improved inference by aligning reads to a variation graph, i.e., a reference sequence augmented with known genetic variations. Given a variation graph in the form of a directed acyclic string graph, the sequence to graph alignment problem seeks to find the best matching path in the graph for an input query sequence. Solving this problem exactly using a sequential dynamic programming algorithm takes quadratic time in terms of the graph size and query length, making it difficult to scale to high throughput DNA sequencing data. In this work, we propose the first parallel algorithm for computing sequence to graph alignments that leverages multiple cores and single-instruction multiple-data (SIMD) operations. We take advantage of the available inter-task parallelism, and provide a novel blocked approach to compute the score matrix while ensuring high memory locality. Using a 48-core Intel Xeon Skylake processor, the proposed algorithm achieves peak performance of 317 billion cell updates per second (GCUPS), and demonstrates near linear weak and strong scaling on up to 48 cores. It delivers significant performance gains compared to existing algorithms, and results in run-time reduction from multiple days to three hours for the problem of optimally aligning high coverage long (PacBio/ONT) or short (Illumina) DNA reads to an MHC human variation graph containing 10 million vertices.
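A sequential reference version of the recurrence the paper parallelizes: align a query to a DAG whose vertices (one character each) arrive in topological order, taking the best predecessor score at each step. The scoring constants and the tiny variation graph are illustrative assumptions, not the paper's exact formulation:

```python
# dp[v][j]: best score aligning some path ending at vertex v with query[:j].
def align_to_dag(vertices, edges, query, match=2, mismatch=-1, gap=-1):
    preds = {v: [] for v in range(len(vertices))}
    for u, v in edges:
        preds[v].append(u)
    m = len(query)
    start = [j * gap for j in range(m + 1)]        # score before entering the graph
    dp = []
    for v, ch in enumerate(vertices):              # topological order
        bases = [dp[p] for p in preds[v]] or [start]
        row = [max(b[0] for b in bases) + gap] + [0] * m
        for j in range(1, m + 1):
            sub = max(b[j - 1] for b in bases) + (match if ch == query[j - 1] else mismatch)
            dele = max(b[j] for b in bases) + gap  # vertex consumed, gap in query
            ins = row[j - 1] + gap                 # query char consumed, gap in graph
            row[j] = max(sub, dele, ins)
        dp.append(row)
    return max(row[m] for row in dp)               # free end anywhere in the graph

# Tiny variation graph: G -> {A | C} -> T -> T; the query matches the A branch.
verts = list("GACTT")
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(align_to_dag(verts, edges, "GATT"))          # -> 8 (four matches)
```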
Efficient Architecture-Aware Acceleration of BWA-MEM for Multicore Systems Innovations in Next-Generation Sequencing are enabling generation of DNA sequence data at ever faster rates and at very low cost. For example, the Illumina NovaSeq 6000 sequencer can generate 6 Terabases of data in less than two days, sequencing nearly 20 Billion short DNA fragments called reads at the low cost of $1000 per human genome. Large sequencing centers typically employ hundreds of such systems. Such high-throughput and low-cost generation of data underscores the need for commensurate acceleration in downstream computational analysis of the sequencing data. A fundamental step in downstream analysis is mapping of the reads to a long reference DNA sequence, such as a reference human genome. Sequence mapping is a compute-intensive step that accounts for more than 30% of the overall time of the GATK (Genome Analysis ToolKit) best practices workflow. BWA-MEM is one of the most widely used tools for sequence mapping and has tens of thousands of users. In this work, we focus on accelerating BWA-MEM through an efficient architecture-aware implementation, while maintaining identical output. The volume of data requires distributed computing and is usually processed on clusters or cloud deployments with multicore processors usually being the platform of choice. Since the application can be easily parallelized across multiple sockets (even across distributed memory systems) by simply distributing the reads equally, we focus on performance improvements on a single socket multicore processor. BWA-MEM run time is dominated by three kernels, collectively responsible for more than 85% of the overall compute time. We improved the performance of the three kernels by 1) using techniques to improve cache reuse, 2) simplifying the algorithms, 3) replacing many small memory allocations with a few large contiguous ones to improve hardware prefetching of data, 4) software prefetching of data, and 5) utilization of SIMD wherever applicable and massive reorganization of the source code to enable these improvements. As a result, we achieved nearly 2x, 183x, and 8x speedups on the three kernels, respectively, resulting in up to 3.5x and 2.4x speedups on end-to-end compute time over the original BWA-MEM on single thread and single socket of Intel Xeon Skylake processor. To the best of our knowledge, this is the highest reported speedup over BWA-MEM (running on a single CPU) while using a single CPU or a single CPU-single GPGPU/FPGA combination.
Algorithmic Improvement and GPU Acceleration of the GenASM Algorithm We improve on GenASM, a recent algorithm for genomic sequence alignment, by significantly reducing its memory footprint and bandwidth requirement. Our algorithmic improvements reduce the memory footprint by 24× and the number of memory accesses by 12×. We efficiently parallelize the algorithm for GPUs, achieving a 4.1× speedup over a CPU implementation of the same algorithm, a 62× speedup over minimap2's CPU-based KSW2, and a 7.2× speedup over the CPU-based Edlib for long reads.
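A minimal Shift-Or (Bitap-family) exact matcher, the style of bit-parallel algorithm GenASM generalizes to approximate matching; the inputs here are toy data and this is not GenASM itself:

```python
# Shift-Or matching: bit i of R is 0 iff pattern[0..i] matches the text
# ending at the current position; all partial matches advance in parallel.
def bitap_exact(text, pattern):
    m = len(pattern)
    mask = {c: ~0 for c in set(text) | set(pattern)}
    for i, c in enumerate(pattern):
        mask[c] &= ~(1 << i)          # clear bit i where pattern[i] == c
    R, hits = ~0, []
    for j, c in enumerate(text):
        R = (R << 1) | mask[c]        # extend all partial matches at once
        if R & (1 << (m - 1)) == 0:   # bit m-1 clear: full pattern matched
            hits.append(j - m + 1)
    return hits

print(bitap_exact("GATTACAGATT", "GATT"))   # -> [0, 7]
```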
An FPGA Accelerator of the Wavefront Algorithm for Genomics Pairwise Alignment In the last years, advances in next-generation sequencing technologies have enabled the proliferation of genomic applications that guide personalized medicine. These applications have an enormous computational cost due to the large amount of genomic data they process. The first step in many of these applications consists in aligning reads against a reference genome. Very recently, the wavefront al...
PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA) which enhances memristor crossbars with general purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques exposed through a specialized Instruction Set Architecture (ISA) retain the efficiency of in-memory computing and analog circuitry, without compromising programmability. We also present the PUMA compiler which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of 577 GOPS/s/mm 2 and 837~GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446× energy and 66× latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.
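An idealized functional model of the analog crossbar matrix-vector multiplication that PUMA-style designs build on: row voltages times crosspoint conductances, with currents summed per column. The noise term and all numeric values are illustrative assumptions:

```python
# Analog crossbar MVM: I_j = sum_i V_i * G_ij (Kirchhoff's current law).
import numpy as np

def crossbar_mvm(voltages, conductances, sigma=0.0, rng=None):
    currents = voltages @ conductances        # column currents
    if sigma and rng is not None:             # optional device/read noise
        currents = currents + rng.normal(0.0, sigma, size=currents.shape)
    return currents                           # an ADC would digitize these

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(16, 16))    # memristor conductances (S)
v = rng.uniform(0.0, 0.2, size=16)            # read voltages (V)
print(crossbar_mvm(v, G, sigma=1e-7, rng=rng)[:4])
```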
Why systolic architectures?
The impact of DHT routing geometry on resilience and proximity The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance.
A 9-bit, 14 μW and 0.06 mm² Pulse Position Modulation ADC in 90 nm Digital CMOS. This work presents a compact, low-power, time-based architecture for nanometer-scale CMOS analog-to-digital conversion. A pulse position modulation ADC architecture is proposed and a prototype 9 bit PPM ADC incorporating a two-step TDC scheme is presented as proof of concept. The 0.06 mm² prototype is implemented in 90 nm CMOS and achieves 7.9 effective bits across the entire input bandwidth and d...
Nonlinear semidefinite programming: sensitivity, convergence, and an application in passive reduced-order modeling We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. This paper studies a sequential semidefinite programming (SSP) method, which is a generalization of the well-known sequential quadratic programming method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and then based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
A study of disturbance observers with unknown relative degree of the plant. Robust stability of the disturbance observer (DOB) control system is studied when the relative degree of the plant is not the same as that of the nominal model. The study reveals that the closed-loop system can easily become unstable with sufficiently fast Q-filter when the relative degree of the plant is not known. In a few cases of unknown relative degree, however, robust stability can be obtained, and we present a design guideline of the nominal model, as well as the Q-filter, for that purpose. Moreover, a universal design of DOB is given for a plant whose relative degree is uncertain but less than or equal to four.
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolutional neural networks (CNNs), one of today's main flavors of deep learning techniques, dominate various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect software neural network compression from hardware acceleration, whi...
1.11
0.11
0.1
0.1
0.1
0.1
0.1
0.033333
0
0
0
0
0
0
Fast linear algebra-based triangle counting with KokkosKernels Triangle counting serves as a key building block for a set of important graph algorithms in network science. In this paper, we address the IEEE HPEC Static Graph Challenge problem of triangle counting, focusing on obtaining the best parallel performance on a single multicore node. Our implementation uses a linear algebra-based approach to triangle counting that has grown out of work related to our miniTri data analytics miniapplication [1] and our efforts to pose graph algorithms in the language of linear algebra. We leverage KokkosKernels to implement this approach efficiently on multicore architectures. Our performance results are competitive with the fastest known graph traversal-based approaches and are significantly faster than the Graph Challenge reference implementations, up to 670,000 times faster than the C++ reference and 10,000 times faster than the Python reference on a single Intel Haswell node.
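The linear-algebra formulation behind such approaches, shown here in dense NumPy purely for clarity (KokkosKernels uses sparse masked matrix products): with L the strictly lower-triangular part of the adjacency matrix A, the triangle count is sum((L @ L) * L), each triangle counted exactly once; the example graph is illustrative:

```python
# (L @ L)[i][k] counts wedges i > j > k; masking by L keeps only wedges
# closed by an edge (i, k), so the sum equals the number of triangles.
import numpy as np

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]])        # 4-node graph with triangles {0,1,2}, {0,2,3}

L = np.tril(A, -1)                  # strictly lower-triangular part
print(int(((L @ L) * L).sum()))     # -> 2
```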
k2-Trees for Compact Web Graph Representation This paper presents a Web graph representation based on a compact tree structure that takes advantage of large empty areas of the adjacency matrix of the graph. Our results show that our method is competitive with the best alternatives in the literature, offering a very good compression ratio (3.3-5.3 bits per link) while permitting fast navigation on the graph to obtain direct as well as reverse neighbors (2-15 microseconds per neighbor delivered). Moreover, it allows for extended functionality not usually considered in compressed graph representations.
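A toy k²-tree builder (k = 2) for a small adjacency matrix, sketching the idea above: each level is a bitmap over the k² submatrices, and all-zero quadrants prune whole subtrees. The matrix side length is assumed to be a power of k; a real implementation would store the bitmaps compactly with rank/select support:

```python
# Build per-level bitmaps of a k^2-tree: 1 means "quadrant non-empty, recurse".
import numpy as np

def k2_tree(M, k=2):
    levels, queue = [], [M]
    while queue and queue[0].shape[0] > 1:
        bits, nxt = [], []
        for sub in queue:
            n = sub.shape[0] // k
            for i in range(k):
                for j in range(k):
                    block = sub[i * n:(i + 1) * n, j * n:(j + 1) * n]
                    bits.append(int(block.any()))
                    if block.any() and n > 1:
                        nxt.append(block)   # recurse only into non-empty quadrants
        levels.append(bits)
        queue = nxt
    return levels

A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]], dtype=bool)
print(k2_tree(A))   # -> [[1, 0, 0, 1], [0, 1, 1, 0, 0, 0, 0, 1]]
```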
Accelerating sparse matrix-vector multiplication on GPUs using bit-representation-optimized schemes The sparse matrix-vector (SpMV) multiplication routine is an important building block used in many iterative algorithms for solving scientific and engineering problems. One of the main challenges of SpMV is its memory-boundedness. Although compression has been proposed previously to improve SpMV performance on CPUs, its use has not been demonstrated on the GPU because of the serial nature of many compression and decompression schemes. In this paper, we introduce a family of bit-representation-optimized (BRO) compression schemes for representing sparse matrices on GPUs. The proposed schemes, BRO-ELL, BRO-COO, and BRO-HYB, perform compression on index data and help to speed up SpMV on GPUs through reduction of memory traffic. Furthermore, we formulate a BRO-aware matrix reordering scheme as a data clustering problem and use it to increase compression ratios. With the proposed schemes, experiments show that average speedups of 1.5× compared to ELLPACK and HYB can be achieved for SpMV on GPUs.
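A sketch of the bit-representation idea: store a sparse row's column indices as deltas packed with just enough bits for the largest delta. The function names and layout are illustrative, not the paper's exact BRO-ELL/COO/HYB formats:

```python
# Delta-encode column indices, then pack them at a uniform reduced bit width;
# decompression is a shift/mask per index followed by a prefix sum.
def pack_indices(cols):
    deltas = [cols[0]] + [b - a for a, b in zip(cols, cols[1:])]
    width = max(d.bit_length() for d in deltas) or 1
    packed = 0
    for i, d in enumerate(deltas):
        packed |= d << (i * width)
    return packed, width, len(deltas)

def unpack_indices(packed, width, count):
    out, acc = [], 0
    for i in range(count):
        acc += (packed >> (i * width)) & ((1 << width) - 1)
        out.append(acc)                          # prefix sums restore indices
    return out

p, w, c = pack_indices([3, 7, 8, 100])
print(w, "bits per index instead of 32")         # -> 7
assert unpack_indices(p, w, c) == [3, 7, 8, 100]
```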
Compression-aware graph computation. Much recent work has focused on parallelizing graph algorithms, including PowerGraph [9] and Ligra [14]. These frameworks process large graphs in shared memory, requiring a terabyte of memory and incurring expensive maintenance costs. Reducing graph size to fit in memory is thus crucial in cutting the cost of large-scale graph computation. Compression has been widely used to reduce graph size, but it can compromise graph computation efficiency because of nontrivial decompression overhead before computation. In this paper, we propose a simple and yet efficient coding scheme. It not only leads to smaller compressed graphs; it also lets us perform graph computation directly on the compressed graphs with no or partial decompression, namely compression-aware computation, leading to faster running time. Our experiments validate that the coding scheme achieves a 2.99X compression ratio, and three compression-aware graph algorithms achieve 7.02X, 2.88X and 2.34X faster running time than the same algorithms on uncompressed graphs.
Warp-Consolidation: A Novel Execution Model for GPUs. With the unprecedented development of compute capability and extension of memory bandwidth on modern GPUs, parallel communication and synchronization soon become a major concern for continuous performance scaling. This is especially the case for emerging big-data applications. Instead of relying on a few heavily-loaded CTAs that may expose opportunities for intra-CTA data reuse, current technology and design trends suggest the performance potential of allocating more lightweight CTAs for processing individual tasks more independently, as the overheads from synchronization, communication and cooperation may greatly outweigh the benefits from exploiting limited data reuse in heavily-loaded CTAs. This paper follows this trend and proposes a novel execution model for modern GPUs that hides the CTA execution hierarchy from the classic GPU execution model, while exposing the originally hidden warp-level execution. Specifically, it relies on individual warps to undertake the original CTAs' tasks. The major observation is that by replacing traditional inter-warp communication (e.g., via shared memory), cooperation (e.g., via bar primitives) and synchronizations (e.g., via CTA barriers) with more efficient intra-warp communication (e.g., via register shuffling), cooperation (e.g., via warp voting) and synchronizations (naturally lockstep execution) across the SIMD-lanes within a warp, significant performance gain can be achieved. We analyze the pros and cons of this design and propose corresponding solutions to counter potential negative effects. Experimental results on a diverse group of thirty-two representative applications show that our proposed Warp-Consolidation execution model can achieve an average speedup of 1.7x, 2.3x, 1.5x and 1.2x (up to 6.3x, 31x, 6.4x and 3.8x) on NVIDIA Kepler (Tesla-K80), Maxwell (Tesla-M40), Pascal (Tesla-P100) and Volta (Tesla-V100) GPUs, respectively, demonstrating its applicability and portability. Our approach can be directly employed to either transform legacy codes or write new algorithms on modern commodity GPUs.
Thinking Like a Vertex: A Survey of Vertex-Centric Frameworks for Large-Scale Distributed Graph Processing The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of commodity machines are not well supported by popular Big Data tools like MapReduce, which are notoriously poor performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to “think like a vertex” (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
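The fast non-dominated sort at the core of NSGA-II, in plain Python (all objectives minimized; the points are illustrative). Each solution tracks whom it dominates and by how many it is dominated; fronts peel off as domination counts reach zero:

```python
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(points):
    n = len(points)
    S = [[] for _ in range(n)]          # solutions each i dominates
    count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                S[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
        if count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                count[j] -= 1
                if count[j] == 0:       # all dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

pts = [(1, 5), (2, 2), (4, 1), (3, 4), (5, 5)]
print(fast_nondominated_sort(pts))      # -> [[0, 1, 2], [3], [4]]
```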
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
An Introduction To Compressive Sampling Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article s...
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
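The if-conversion idea in miniature: replace a key-dependent branch with a branch-free masked select over fixed-width integers. Python integers are not actually constant-time, so this only illustrates the transformation the compiler backend performs:

```python
# Branch-free select: build an all-ones or all-zeros mask from the secret bit,
# then combine both candidates so control flow never depends on the key.
def ct_select(bit, a, b, width=32):
    """Return a if bit == 1 else b, with no branch on `bit`."""
    lim = (1 << width) - 1
    mask = (-bit) & lim                 # all-ones if bit == 1, else all-zeros
    return (a & mask) | (b & ~mask & lim)

assert ct_select(1, 0xDEAD, 0xBEEF) == 0xDEAD
assert ct_select(0, 0xDEAD, 0xBEEF) == 0xBEEF
```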
A 93% efficiency reconfigurable switched-capacitor DC-DC converter using on-chip ferroelectric capacitors.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback from the output voltage level. Therefore, a fast load-transient response can be achieved. In addition, the output regulation performance is improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-RESR design configuration. The prototype fabricated in a TSMC 0.25μm CMOS process occupies 1.78mm2 including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30mV over a wide load-current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5μs.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Cmos Zero Cross-Conduction Low-Power Driver And Power Mosfets For Integrated Synchronous Buck Converter This paper presents a circuit and a design of a CMOS zero cross-conduction low-power driver for an integrated synchronous buck converter. The circuit has zero cross-conduction, yielding low power dissipation and high efficiency. We used the Cadence design tool with 0.18 μm CMOS technology to implement and verify the functionality of the low-power buck converter driver. The design criteria of these drivers depend on the power dissipation, delay, and physical size. The DC input voltage of this design is 1.2 V, the DC output voltage is 1 V, the output current is 120 mA, and the maximum operating frequency is up to 100 MHz. The total size of the driver is around 140 μm², plus the area of an inductor and a capacitor. The designed converter achieved the required specification, and its efficiency is up to 97.9%. This circuit is more efficient than previous low-power buck converter drivers. The proposed circuit is better than Lee et al.'s design and our recent design in terms of efficiency, speed, and layout area. The new circuit is more than 4 times smaller than Lee's circuit. The circuit can also be used in I/O pads for high-current and low-power IC applications.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
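A toy Chord-style ring on 2^m identifiers: keys are hashed onto the ring and stored at their successor node, and finger tables hold successor(n + 2^i). This sketch computes successors from global knowledge instead of routing hop by hop, and all names and sizes are illustrative:

```python
# Consistent hashing onto a 2^M ring plus per-node finger tables.
import hashlib

M = 6                                          # identifier bits (ring size 64)

def h(key):
    return int(hashlib.sha1(str(key).encode()).hexdigest(), 16) % (1 << M)

nodes = sorted({h(f"node-{i}") for i in range(8)})

def successor(ident):
    return next((n for n in nodes if n >= ident), nodes[0])  # wrap around

def finger_table(n):
    return [successor((n + (1 << i)) % (1 << M)) for i in range(M)]

key = h("some-data-item")
print(f"key {key} lives on node {successor(key)}")
print(f"fingers of node {nodes[0]}: {finger_table(nodes[0])}")
```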
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
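ADMM applied to the lasso, one of the applications surveyed above: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x - z = 0. The penalty rho, iteration count, and synthetic data are illustrative choices:

```python
# ADMM splits the smooth least-squares term (x-update) from the l1 term
# (z-update via soft thresholding), coupled by a scaled dual variable u.
import numpy as np

def soft(v, k):                               # proximal operator of k*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=300):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))         # quadratic x-update
        z = soft(x + u, lam / rho)            # shrinkage z-update
        u = u + x - z                         # dual ascent on the constraint
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.5, -2.0, 3.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # recovers the sparse support
```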
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Just-In-Time Compilation for Verilog: A New Technique for Improving the FPGA Programming Experience FPGAs offer compelling acceleration opportunities for modern applications. However compilation for FPGAs is painfully slow, potentially requiring hours or longer. We approach this problem with a solution from the software domain: the use of a JIT. Code is executed immediately in a software simulator, and compilation is performed in the background. When finished, the code is moved into hardware, and from the user's perspective it simply gets faster. We have embodied these ideas in Cascade: the first JIT compiler for Verilog. Cascade reduces the time between initiating compilation and running code to less than a second, and enables generic printf debugging from hardware. Cascade preserves program performance to within 3× in a debugging environment, and has minimal effect on a finalized design. Crucially, these properties hold even for programs that perform side effects on connected IO devices. A user study demonstrates the value to experts and non-experts alike: Cascade encourages more frequent compilation, and reduces the time to produce working hardware designs.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
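Threaded code in spirit, sketched in Python since no hardware realization fits here: the program is a flat list of routine references (with inline operands), and each step dispatches straight to the next routine with no central decode switch. The mini stack machine below is an illustrative construction, not from the paper:

```python
# Each primitive is a routine; the "thread" is just a list of routine
# references and inline operands, executed by jumping from one to the next.
def lit(vm):  vm.stack.append(vm.code[vm.ip]); vm.ip += 1   # push next cell
def add(vm):  b, a = vm.stack.pop(), vm.stack.pop(); vm.stack.append(a + b)
def emit(vm): print(vm.stack.pop())
def halt(vm): vm.running = False

class VM:
    def __init__(self, code):
        self.code, self.ip, self.stack, self.running = code, 0, [], True
    def run(self):
        while self.running:
            op = self.code[self.ip]   # fetch the next thread entry
            self.ip += 1
            op(self)                  # "jump" straight to the routine

VM([lit, 2, lit, 40, add, emit, halt]).run()   # prints 42
```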
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter. Performance modelers must also faithfully model complex modern machines and effectively manage the large software projects needed to model such machines. Asim addresses these needs by providing a modular and reusable framework for creating many models. The framework's modularity helps break down the performance-modeling problem into individual pieces that can be modeled separately, while its reusability allows using a software component repeatedly in different contexts.
An approach to testing specifications An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e. g. the protocol specification) with the reference specification (e. g. the communication service specification). Non-determinism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of non-determinacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a Transport protocol and a distributed mutual exclusion algorithm is described.
Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing (HPC). Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is lacking. For example, it is often unclear if reported improvements are deterministic or observed by chance. In addition to distilling best practices from existing work, we propose statistically sound analysis and reporting techniques and simple guidelines for experimental design in parallel computing and codify them in a portable benchmarking library. We aim to improve the standards of reporting research results and initiate a discussion in the HPC field. A wide adoption of our minimal set of rules will lead to better interpretability of performance results and improve the scientific culture in HPC.
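As a small illustration of the kind of statistically sound reporting argued for above, the hedged sketch below (not the paper's benchmarking library; the sample values and the normal-approximation rank bounds are invented for illustration) reports a median runtime together with a nonparametric 95% confidence interval, rather than a bare mean.

```python
# Hedged sketch: report median runtime plus a nonparametric confidence
# interval derived from order statistics, so chance improvements are
# visible. Not the paper's library; values are illustrative.
import math

def median_with_ci(samples, z=1.96):
    """95% CI for the median via the normal approximation to binomial ranks."""
    xs = sorted(samples)
    n = len(xs)
    lo_rank = max(1, math.floor((n - z * math.sqrt(n)) / 2))       # 1-based
    hi_rank = min(n, math.ceil(1 + (n + z * math.sqrt(n)) / 2))
    med = (xs[(n - 1) // 2] + xs[n // 2]) / 2
    return med, xs[lo_rank - 1], xs[hi_rank - 1]

# ten invented runtimes of the same experiment, in seconds
runs = [10.2, 10.4, 9.9, 10.1, 10.8, 10.3, 10.0, 10.5, 10.2, 10.6]
med, lo, hi = median_with_ci(runs)
print(f"median {med:.2f} s, 95% CI [{lo:.2f} s, {hi:.2f} s]")
```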
OpenFPGA: An Opensource Framework Enabling Rapid Prototyping of Customizable FPGAs Driven by the strong need in data processing applications, Field Programmable Gate Arrays (FPGAs) are playing an ever-increasing role as programmable accelerators in modern computing systems. To fully unlock processing capabilities for domain-specific applications, FPGA architectures have to be tailored for seamless cooperation with other computing resources. However, prototyping and bringing to production a customized FPGA is a costly and complex endeavor even for industrial vendors. In this paper, we introduce OpenFPGA, an opensource framework that enables rapid prototyping of customizable FPGA architectures through a semi-custom design approach. We propose an XML-to-Prototype design flow, where the Verilog netlists of a full FPGA fabric can be autogenerated using an extension of the XML language from the VTR framework and then fed into a back-end flow to generate production-ready layouts. OpenFPGA also includes a general-purpose Verilog-to-Bitstream generator for any FPGA described by the XML language. We demonstrate the capability of this automatic design flow with a Stratix IV-like FPGA architecture using a commercial 40nm technology node, and perform a detailed comparison to its academic and commercial counterparts. Compared to the current state-of-the-art academic results, our FPGA fabric reduces the area by 1.75× and the delay by 3× on average. In addition, OpenFPGA significantly reduces the gap between semi-custom designed FPGAs and fully-optimized commercial products, with a penalty of only 60% in area and 30% in delay, respectively.
Improved Abstractions and Turnaround Time for FPGA Design Validation and Debug Rapidly increasing FPGA density and complexity has heightened the need for higher levels of abstraction in validation and more rapid, focused approaches for design inspection. We present two methods of validating and debugging active, implemented FPGA designs running at target speeds. The first binds high-level software reference models directly to hardware enabling complex, automated, software-controlled testing scenarios, reducing the reliance on simulation. The second approach provides direct interactivity and visibility into a running FPGA design, enabling software-controlled breakpoints and arbitrary access to design registers. In-circuit breakpoints can be modified without the need to re-implement the entire design.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Federated Learning: Challenges, Methods, and Future Directions Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized. Training in heterogeneous and potentially massive networks introduces novel challenges that require a fundamental departure from standard approaches for large-scale machine learning, distributed optimization, and privacy-preserving data analysis. In this article, we discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.
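To make the setup concrete, here is a hedged numpy sketch of federated averaging, one common baseline in this literature (the survey itself covers many variants). The linear model, device count, learning rate, and round counts are all invented for illustration; the point is that only model updates, never raw data, leave a device.

```python
# Hedged sketch of one federated-averaging training loop: devices take a
# few local gradient steps on private data; the server averages the
# resulting models. All parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, X, y, lr=0.1, steps=5):
    for _ in range(steps):               # plain least-squares gradient steps
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# three "devices" with private data drawn from the same linear model
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.standard_normal((20, 2))
    devices.append((X, X @ w_true + 0.01 * rng.standard_normal(20)))

w_global = np.zeros(2)
for _round in range(25):                 # server rounds
    updates = [local_sgd(w_global, X, y) for X, y in devices]
    w_global = np.mean(updates, axis=0)  # equal weights: equal data sizes
print(w_global)                          # approaches [2, -1]
```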
Incremental Validation of XML Documents We investigate the incremental validation of XML documents with respect to DTDs and XML Schemas, under updates consisting of element tag renamings, insertions and deletions. DTDs are modeled as extended context-free grammars and XML Schemas are abstracted as "specialized DTDs", which decouples element types from element tags. For DTDs, we exhibit an O(m log n) incremental validation algorithm using an auxiliary structure of size O(n), where n is the size of the document and m the number of updates. For specialized DTDs, we provide an O(m log² n) incremental algorithm, again using an auxiliary structure of size O(n). This is a significant improvement over brute-force re-validation from scratch.
An MLSE Receiver for Electronic Dispersion Compensation of OC-192 Fiber Links A maximum-likelihood sequence estimation (MLSE) receiver is fabricated to combat dispersion/intersymbol interference (chromatic and polarization mode), noise (optical and electrical), and nonlinearities (e.g., fiber, receiver photodiode, or laser) in OC-192 metro and long-haul links. The MLSE receiver includes a variable gain amplifier with 40-dB gain range and 7.5-GHz 3-dB bandwidth, a 12.5-Gb/s ...
Secure download system based on software defined radio composed of FPGAs We focus attention on the development of security techniques using software defined radio (SDR) technologies. We propose a new secure download system which uses the characteristics of the field programmable gate arrays (FPGAs) composing the SDR. The novelty of the proposed system is that it enables high-security encipherment. This is achieved by exploiting the characteristic of FPGAs that allows systems to be arranged in a variety of different layouts, and by using the configuration information as the key, which unifies key renewal and encipherment. In addition, the proposed system offers high security against illegal acquisition such as wiretapping, and can be used in conjunction with any other current cipher algorithm. As an evaluation of the security, we show that the proposed system has high immunity to illegal acquisition of software via replay attack, by verification of the protocol as well as by numerical computation. The proposed system can therefore realize highly secure software downloads based on SDR.
An efficient low-cost fixed-point digital down converter with modified filter bank In a radar system, the digital down converter (DDC) is the most important part of the IF radar receiver: it extracts the needed baseband signal from the modulated IF signal and down-samples it with a decimation factor of 20. This paper proposes an efficient low-cost DDC structure comprising an NCO, a mixer and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. An optimized fixed-point program is then designed in order to implement the DDC on a fixed-point DSP or FPGA. The simulation results show that the proposed DDC achieves the expected specification in an IF radar receiver application.
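The signal path described above lends itself to a short numpy sketch. This is an illustrative stand-in, not the paper's fixed-point design: a single moving-average stage plays the role of the 5-stage CIC/CFIR/HB bank, and the sample rate and IF frequency are made up.

```python
# Illustrative DDC sketch: NCO generates a complex local oscillator, the
# mixer shifts the IF signal to baseband, and a crude moving-average
# low-pass plus decimate-by-20 stands in for the paper's filter bank.
import numpy as np

fs, f_if, decim = 80e6, 20e6, 20       # invented sample rate and IF
n = np.arange(4096)
if_signal = np.cos(2 * np.pi * f_if / fs * n)   # modulated IF input

nco = np.exp(-2j * np.pi * f_if / fs * n)       # NCO: digital LO
baseband = if_signal * nco                      # mixer output

kernel = np.ones(decim) / decim                 # 1-stage CIC-like average
filtered = np.convolve(baseband, kernel, mode="same")
out = filtered[::decim]                         # keep every 20th sample
print(out.shape, abs(out.mean()))               # DC term of ~0.5 survives
```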
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
1.11
0.1
0.1
0.1
0.1
0.1
0.06
0
0
0
0
0
0
0
A 1.75-GHz polar modulated CMOS RF power amplifier for GSM-EDGE This work presents a fully integrated linearized CMOS RF amplifier, integrated in a 0.18-μm CMOS process. The amplifier is implemented on a single chip, requiring no external matching or tuning networks. Peak output power is 27 dBm with a power-added efficiency (PAE) of 34%. The amplitude modulator, implemented on the same chip as the RF amplifier, modulates the supply voltage of the RF amp...
An intelligent power amplifier MMIC using a new adaptive bias control circuit for W-CDMA applications A high-linearity and high-efficiency MMIC power amplifier is proposed that adopts a new on-chip adaptive bias control circuit, which simultaneously improves efficiency at the low output power level and linearity at the high output power level. The adaptive bias control circuit detects the input power level and supplies a low quiescent current of 16 mA at the low output power level and an increased current up to 90 mA according to the increased power level adaptively. The intelligent W-CDMA power amplifier using the adaptive bias circuit exhibits an improvement of average power usage efficiency of more than 1.93 times, and an adjacent channel leakage ratio improved by 4 dB at the output power of 28.3 dBm. Index Terms—Adaptive bias control, heterojunction bipolar transistor (HBT), high efficiency, high linearity, MMIC, power amplifier, wide-band code-division multiple access (W-CDMA).
Quantization Noise Suppression in Digitally Segmented Amplifiers In this paper, we consider the problem of out-of-band quantization noise suppression in the general family of direct digital-to-RF (DDRF) conversion circuits, where the RF carrier is amplitude modulated by a quantized representation of the baseband signal. Hence, it is desired to minimize the out-of-band quantization noise in order to meet stringent requirements such as receive-band noise levels in frequency-division duplex transceivers. In this paper, we address the problem of out-of-band quantization noise by introducing a novel signal-processing solution, which we refer to as “segmented filtering (SF).” We assess the capability of the proposed SF solution by means of performance analysis and results that have been obtained via circuit-level computer simulations as well as laboratory measurements. Our proposed approach has demonstrated the ability to preserve the required signal quality and power amplifier (PA) efficiency while providing more than 35-dB attenuation of the quantization noise, thus eliminating the need for substantial post-PA passband RF filtering.
A fully differential ultra-compact broadband transformer based quadrature generation scheme This paper presents an ultra-compact transformer-based quadrature generation scheme, which converts a differential input signal to fully differential quadrature outputs with low passive loss, broad bandwidth, and robustness against process variations. A new layout strategy is proposed to implement this 6-port transformer-based network within only one inductor-footprint for significant area saving. A 5 GHz quadrature generation design is implemented in a standard 65 nm CMOS process with a core area of only 260 μm by 260 μm, achieving size reduction of over 1,600 times compared to a 5GHz λ/4 branch-line coupler. This implementation achieves 0.82 dB signal loss at 5 GHz and maximum 3.8° phase error and ±0.5dB amplitude mismatch within a bandwidth of 13% (4.75 GHz to 5.41 GHz). Measurement results over 9 independent samples show a standard phase deviation of 1.9° verifying the robustness of the design.
A 1.95 GHz Fully Integrated Envelope Elimination and Restoration CMOS Power Amplifier Using Timing Alignment Technique for WCDMA and LTE A fully integrated envelope elimination and restoration (EER) CMOS power amplifier (PA) has been developed for WCDMA and LTE handsets. EER is a supply modulation technique that first divides modulated RF signal into envelope and phase signals and then restores it at a switching PA output. Supply voltage of the switching PA is modulated by the envelope signal through a high-speed supply modulator. EER PA is highly efficient due to the switching PA and the supply modulation. However, it generally has difficulty, especially for a wide bandwidth baseband application like LTE, achieving a wide bandwidth for phase signal path and highly accurate timing between envelope and phase signals. To overcome these challenges, an envelope/phase generator based on a mixer and a limiter was proposed to generate the wide bandwidth phase signal, and a timing aligner based on a delay locked loop with a variable high-pass filter (HPF) was proposed to compensate for the timing mismatch. The chip was implemented in 90 nm CMOS technology. Measured power-added efficiency (PAE) and adjacent channel leakage ratio (ACLR) were 39% and -41 dBc for WCDMA, and measured PAE and ACLR E-UTRA1 were 32% and -33 dBc for 20 MHz-BW LTE.
A Digitally Modulated Polar CMOS Power Amplifier With a 20-MHz Channel Bandwidth This paper presents a CMOS RF power amplifier that employs a digital polar architecture to improve the overall power efficiency when amplifying signals with high linearity requirements. The power amplifier comprises 64 parallel RF amplifiers that are driven by a constant envelope RF phase-modulated signal. The unit amplifiers are digitally activated by a 6-bit envelope code to construct a non-constant envelope RF output, thereby performing a digital-to-RF conversion. In order to suppress the spectral images resulting from the discrete-time to continuous-time conversion of the envelope, the use of oversampling and four-fold linear interpolation is explored. An experimental prototype of the polar amplifier has been integrated in a 0.18-μm CMOS technology, occupies a total die area of 1.8 mm², operates at a 1.6-GHz carrier frequency with a channel bandwidth of 20 MHz. For an OFDM signal, it achieves a power-added efficiency of 6.7% with an EVM of - 26.8 dB while delivering 13.6 dBm of linear output power and drawing 145 mA from a 1.7-V supply.
An Efficient Mixed-Signal 2.4-GHz Polar Power Amplifier in 65-nm CMOS Technology A 65-nm digitally modulated polar transmitter incorporates a fully integrated, efficient 2.4-GHz switching Inverse Class-D power amplifier. Low-power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8-dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54-Mb/s WLAN data with 18% average efficiency.
2.8 A Class-G voltage-mode Doherty power amplifier In modern communication, wideband and high-spectral-efficiency modulation results in high peak-to-average power ratio (PAPR), up to 8 to 10 dB. Well-known PA-efficiency-enhancement techniques, such as Doherty and outphasing, offer reduced efficiency improvement beyond 6 dB back-off, limiting the efficiency enhancement obtainable with high-PAPR modulation. Recent works have shown that a combination of different techniques can result in improved efficiency well beyond 6 dB back-off. However, these combined techniques have come at the cost of glitches due to mode transitions, when the power supply voltage or load impedance undergoes large variations at critical power levels. Switching between power supply voltages causes significant glitches, which degrade the EVM and ACPR of the transmitted signal. Reasonable EVM is achieved by reducing the average output power so that power supply switching is less frequent. A “skipping window” technique is proposed in the work of Tai, et al. (2012) to skip high-frequency mode transitions, reducing overall glitching. While this improves the ACPR, the efficiency is degraded since there is no enhancement during a skipped transition. In this work, the authors demonstrate the combination of the voltage-mode Doherty (VMD) and Class-G switched-capacitor power amplifier (SCPA) techniques. Together, these techniques provide efficiency peaking at both 6 and 12 dB back-off over a wide bandwidth without introducing the glitches present in previous works. The Class-G VMD with integrated matching network achieves a PAE of 24% and better than -32 dB EVM for 256-QAM OFDM signals.
A monolithic current-mode CMOS DC-DC converter with on-chip current-sensing technique A monolithic current-mode CMOS DC-DC converter with integrated power switches and a novel on-chip current sensor for feedback control is presented in this paper. With the proposed accurate on-chip current sensor, the sensed inductor current, combined with the internal ramp signal, can be used for current-mode DC-DC converter feedback control. In addition, no external components and no extra I/O pins are needed for the current-mode controller. The DC-DC converter has been fabricated with a standard 0.6-μm CMOS process. The measured absolute error between the sensed signal and the inductor current is less than 4%. Experimental results show that this converter with on-chip current sensor can operate from 300 kHz to 1 MHz with supply voltage from 3 to 5.2 V, which is suitable for single-cell lithium-ion battery supply applications. The output ripple voltage is about 20 mV with a 10-μF off-chip capacitor and 4.7-μH off-chip inductor. The power efficiency is over 80% for load current from 50 to 450 mA.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
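The classic construction behind this k-of-n property is a random degree-(k-1) polynomial over a prime field with the secret as its constant term: any k points determine the polynomial, while k-1 points are consistent with every possible secret. A minimal Python sketch (the field size and function names are illustrative choices, not from the paper):

```python
# Minimal sketch of a k-of-n threshold scheme over GF(P): split D into
# shares (x, f(x)) of a random degree-(k-1) polynomial with f(0) = D,
# and reconstruct by Lagrange interpolation at x = 0.
import random

P = 2**61 - 1   # a Mersenne prime; illustrative field-size choice

def split(secret, n, k):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat), valid as P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares -> 123456789
```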
Difference engine: harnessing memory redundancy in virtual machines Virtual machine monitors (VMMs) are a popular platform for Internet hosting centers and cloud-based compute services. By multiplexing hardware resources among virtual machines (VMs) running commodity operating systems, VMMs decrease both the capital outlay and management overhead of hosting centers. Appropriate placement and migration policies can take advantage of statistical multiplexing to effectively utilize available processors. However, main memory is not amenable to such multiplexing and is often the primary bottleneck in achieving higher degrees of consolidation. Previous efforts have shown that content-based page sharing provides modest decreases in the memory footprint of VMs running similar operating systems and applications. Our studies show that significant additional gains can be had by leveraging both subpage-level sharing (through page patching) and in-core memory compression. We build Difference Engine, an extension to the Xen VMM, to support each of these---in addition to standard copy-on-write full-page sharing---and demonstrate substantial savings across VMs running disparate workloads (up to 65%). In head-to-head memory-savings comparisons, Difference Engine outperforms VMware ESX server by a factor of 1.6-2.5 for heterogeneous workloads. In all cases, the performance overhead of Difference Engine is less than 7%.
Optimal regional consecutive leader election in mobile ad-hoc networks The regional consecutive leader election (RCLE) problem requires mobile nodes to elect a leader within bounded time upon entering a specific region. We prove that any algorithm requires Ω(Dn) rounds for leader election, where D is the diameter of the network and n is the total number of nodes. We then present a fault-tolerant distributed algorithm that solves the RCLE problem and works even in settings where nodes do not have access to synchronized clocks. Since nodes set their leader variable within O(Dn) rounds, our algorithm is asymptotically optimal with respect to time complexity. Due to its low message bit complexity, we believe that our algorithm is of practical interest for mobile wireless ad-hoc networks. Finally, we present a novel and intuitive constraint on mobility that guarantees a bounded communication diameter among nodes within the region of interest.
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√(log n))} · n time units. We show that the lower bound of the algorithm is 2^{Ω(√(log n))} · n time units.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.01279
0.01193
0.011929
0.011908
0.011314
0.006113
0.004366
0.000749
0.000001
0
0
0
0
0
Optimal Policies for the Management of an Electric Vehicle Battery Swap Station Optimizing operations at electric vehicle (EV) battery swap stations is motivated by the movement to make transportation cleaner and more efficient. An EV battery swap station allows EV owners to quickly exchange their depleted battery for a fully charged battery. We introduce the EV Battery-Swap Station Management Problem (EVB-SSMP), which models battery charging and discharging operations at an EV battery swap station facing nonstationary, stochastic demand for battery swaps, nonstationary prices for charging depleted batteries, and nonstationary prices for discharging fully charged batteries. Discharging through vehicle-to-grid is beneficial for aiding power load balancing. The objective of the EVB-SSMP is to determine the optimal policy for charging and discharging batteries that maximizes expected total profit over a fixed time horizon. The EVB-SSMP is formulated as a finite-horizon, discrete-time Markov decision problem, and an optimal policy is found using dynamic programming. We derive structural properties, including sufficiency conditions that ensure the existence of a monotone optimal policy. Utilizing available demand and electricity pricing data, we design and conduct two main computational experiments to obtain policy insights regarding the management of EV battery swap stations. We compare the optimal policy to two benchmark policies that are easily implementable by swap station managers. Policy insights include the relationship between the minimum battery level and the number of EVs in a local service area, the pricing incentive necessary to encourage effective discharge behavior, and the viability of EV battery swap stations under many conditions. The online appendix is available at https://doi.org/10.1287/trsc.2016.0676.
Online Station Assignment for Electric Vehicle Battery Swapping This paper investigates the online station assignment for (commercial) electric vehicles (EVs) that request battery swapping from a central operator, i.e., in the absence of future information a battery swapping service station has to be assigned instantly to each EV upon its request. Based on EVs’ locations, the availability of fully-charged batteries at service stations in the system, as well as...
A BCMP network approach to modeling and controlling autonomous mobility-on-demand systems In this paper we present a queuing network approach to the problem of routing and rebalancing a fleet of self-driving vehicles providing on-demand mobility within a capacitated road network. We refer to such systems as autonomous mobility-on-demand (AMoD) systems. We first cast an AMoD system into a closed, multi-class Baskett–Chandy–Muntz–Palacios (BCMP) queuing network model capable of capturing the passenger arrival process, traffic, the state-of-charge of electric vehicles, and the availability of vehicles at the stations. Second, we propose a scalable method for the synthesis of routing and charging policies, with performance guarantees in the limit of large fleet sizes. Third, we explore the applicability of our theoretical results on a case study of Manhattan. Collectively, this paper provides a unifying framework for the analysis and control of AMoD systems, which provides a large set of modeling options (e.g. the inclusion of road capacities and charging constraints), and subsumes earlier Jackson and network flow models.
Electric Vehicle Route Selection and Charging Navigation Strategy Based on Crowd Sensing. This paper has proposed an electric vehicle (EV) route selection and charging navigation optimization model, aiming to reduce EV users' travel costs and improve the load level of the distribution system concerned. Moreover, with the aid of crowd sensing, a road velocity matrix acquisition and restoration algorithm is proposed. In addition, the waiting time at charging stations is addressed based o...
Ride-Hailing Order Dispatching At Didi Via Reinforcement Learning Order dispatching is instrumental to the marketplace engine of a large-scale ride-hailing platform, such as the DiDi platform, which continuously matches passenger trip requests to drivers at a scale of tens of millions per day. Because of the dynamic and stochastic nature of supply and demand in this context, the ride-hailing order-dispatching problem is challenging to solve for an optimal solution. Added to the complexity are considerations of system response time, reliability, and multiple objectives. In this paper, we describe how our approach to this optimization problem has evolved from a combinatorial optimization approach to one that encompasses a semi-Markov decision-process model and deep reinforcement learning. We discuss the various practical considerations of our solution development and real-world impact to the business.
Optimal Charging Operation of Battery Swapping and Charging Stations With QoS Guarantee. Motivated by the urgent demand for the electric vehicle (EV) fast refueling technologies, battery swapping and charging stations (BSCSs) are envisioned as a promising solution to provide timely EV refueling services. However, inappropriate battery charging operation in BSCSs cannot only incur unnecessary high charging cost but also threaten the reliability of the power grid. In this paper, we aim ...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
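As a worked illustration of the CP form mentioned above: a rank-R tensor is a sum of R outer products of vectors, so a 3-way tensor built from factor matrices is recovered exactly by that sum. The shapes and rank below are arbitrary, and numpy stands in for the toolboxes the survey lists.

```python
# Worked CP/PARAFAC illustration: X = sum_r a_r (outer) b_r (outer) c_r.
# Build a rank-2 3-way tensor from factor matrices, then confirm the
# rank-one-sum reconstruction is exact. Sizes and rank are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))   # one factor matrix per mode
B = rng.standard_normal((5, 2))
C = rng.standard_normal((6, 2))

X = np.einsum('ir,jr,kr->ijk', A, B, C)   # CP form, summed over rank index r
X_rebuilt = sum(
    np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
    for r in range(2)
)
print(np.allclose(X, X_rebuilt))          # True
```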
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
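A hedged usage sketch of the two randomization ingredients named in the abstract, bagging plus a random selection of features at each split, using scikit-learn rather than Breiman's original code; the dataset and hyperparameters are illustrative. The out-of-bag score plays the role of the internal error estimates described above.

```python
# Bagged trees with random feature selection at each split (scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# max_features controls the random feature subset tried at each split;
# oob_score gives an internal generalization-error estimate.
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                             oob_score=True, random_state=0).fit(X_tr, y_tr)
print(clf.oob_score_, clf.score(X_te, y_te))
print(clf.feature_importances_)   # the variable-importance measure
```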
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
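The if-conversion countermeasure can be illustrated with a branchless select: both arms are always evaluated and combined through a mask, so control flow no longer depends on the secret condition. The Python sketch below only demonstrates the transformation; Python's runtime gives no constant-time guarantees, and the paper's pass operates on compiled x86 code.

```python
# Illustrative if-conversion: replace a key-dependent branch with
# straight-line arithmetic that always computes both operands.

def select_branchy(cond, a, b):
    return a if cond else b             # key-dependent control flow

def select_if_converted(cond, a, b):
    mask = -int(bool(cond))             # 0 (False) or -1, i.e. all-ones (True)
    return (a & mask) | (b & ~mask)     # both paths evaluated, one selected

# Same results, but the second version has no secret-dependent branch.
print(select_branchy(True, 0xAA, 0x55), select_if_converted(True, 0xAA, 0x55))
print(select_branchy(False, 0xAA, 0x55), select_if_converted(False, 0xAA, 0x55))
```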
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM In this paper, we propose the concept of compute memory, where computation is deeply embedded into the memory (SRAM). This deep embedding enables multi-row read access and analog signal processing. Compute memory exploits the relaxed precision and linearity requirements of pattern recognition applications. System-level simulations incorporating various deterministic errors from the analog signal chain demonstrate that the limited accuracy of analog processing does not significantly degrade the system performance, i.e., the probability of pattern detection is minimally impacted. The estimated energy saving is 63% as compared to a conventional system with standard embedded memory and a parallel processing architecture, for a 256×256 target image.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.1
0.1
0.1
0.1
0.1
0.066667
0
0
0
0
0
0
0
0
Mismatch-based timing errors in current steering DACs Current Steering Digital-to-Analog Converters (CS-DACs) are important ingredients in many high-speed data converters. Various types of timing errors, such as mismatch-based timing errors, limit broad-band performance. A framework of timing errors is presented here and used to analyze these errors. The extracted relationship between performance, block requirements and architecture (e.g., segmentation) gives insight on design tradeoffs in Nyquist DACs and multi-bit current-based ΣΔ modulators.
Bandwidth Limitation for the Constant Envelope Components of an OFDM Signal in a LINC Architecture The linear amplification using non-linear components (LINC) power amplifier architecture uses saturated amplifiers to achieve high efficiency and linearity. However, the non-linear operations for LINC separation lead to bandwidth expansion. The expanded bandwidth causes problems for the analog circuitry and effectively places a limit on the modulation bandwidth of the transmitted signal. We show that for multi-carrier signals, it is the phase component of the transmitted modulation that dominates the spectrum of the LINC components. A novel bandwidth reduction scheme is then proposed for these components. The scheme repetitively limits the bandwidth of the LINC components while inhibiting envelope variations. Measurements show a 46% bandwidth reduction while still meeting the WLAN spectrum mask. This implies an increase of 85% in the modulation bandwidth of the transmitted signal.
SFDR-bandwidth limitations for high speed high resolution current steering CMOS D/A converters Although very high update rates are achieved in recent publications on high resolution D/A converters, the bottleneck in the design is to achieve a high spurious free output signal bandwidth. The influence of the dynamic output impedance on the chip performance has been analyzed and identified as an important limitation for the spurious free dynamic range (SFDR) of high resolution DACs. Based on the presented analysis, an optimized topology is proposed.
Analysis of a 5.5-V Class-D Stage Used in 30-dBm Outphasing RF PAs in 130- and 65-nm CMOS This brief presents the design and analysis of a 5.5-V class-D stage used in two fully integrated watt-level, +32.0 and + 29.7 dBm, outphasing RF power amplifiers (PAs) in standard 130- and 65-nm CMOS technologies. The class-D stage utilizes a cascode configuration, driven by an ac-coupled low-voltage driver, to allow a 5.5-V supply in the 1.2-/2.5-V technologies without excessive device voltage stress. The rms electric fields (E) across the gate oxides and the optimal bias point, where the voltage stress is equally divided between the transistors, are computed. At the optimal bias point, the rms E, the power dissipation of the parasitic drain capacitance of the common-source transistors, and the equivalent on-resistances are reduced by approximately 25%, 50%, and 25%, compared to a conventional cascode (inverter) stage. To the authors' best knowledge, the class-D PAs presented are among the first fully integrated CMOS outphasing PAs reaching +30 dBm and demonstrate state-of-the-art output power and bandwidth.
A Sensitive Method to Measure the Integral Non-Linearity of a Digital-to-Time Converter based on Phase Modulation A Digital-to-Time Converter (DTC) produces a time delay based on a digital code. As for data converters, linearity is a key metric for a DTC, and it can be characterized by its Integral Non-Linearity (INL). However, measuring the INL of a sub-ps resolution DTC is problematic even when using the best available high-speed oscilloscopes. In this paper, we propose a new method to measure the INL of a DTC by applying digital phase modulation and measuring the output spectrum with a spectrum analyzer. The frequency selectivity of this method improves the measurement resolution down to a few fs and allows measuring an INL below 100 fs. The proposed method is verified by behavioral simulations and employed to measure the INL of a high-resolution DTC realized in 65 nm CMOS, with a time resolution of 25 fs and a standard deviation of 27 fs.
An Efficient Mixed-Signal 2.4-GHz Polar Power Amplifier in 65-nm CMOS Technology A 65-nm digitally modulated polar transmitter incorporates a fully integrated, efficient 2.4-GHz switching Inverse Class-D power amplifier. Low-power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8-dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54-Mb/s WLAN data with 18% average efficiency.
A Switched-Capacitor RF Power Amplifier A fully integrated switched-capacitor power amplifier (SCPA) utilizes switched-capacitor techniques in an EER/Polar architecture. It operates on the envelope of a nonconstant envelope modulated signal as an RF-DAC in order to amplify the signal efficiently. The measured maximum output power and PAE are 25.2 dBm and 45%, respectively. When amplifying an 802.11g 64-QAM orthogonal frequency-division multiplexing (OFDM) signal, the measured error vector magnitude is 2.6% and the average output power and power-added efficiencies are 17.7 dBm and 27%, respectively.
A 90-nm CMOS Doherty power amplifier with minimum AM-PM distortion A linear Doherty amplifier is presented. The design reduces AM-PM distortion by optimizing the device-size ratio of the carrier and peak amplifiers to cancel each other's phase variation. Consequently, this design achieves both good linearity and high backed-off efficiency associated with the Doherty technique, making it suitable for systems with large peak-to-average power ratio (WLAN, WiMAX, etc.). The fully integrated design has on-chip quadrature hybrid coupler, impedance transformer, and output matching networks. The experimental 90-nm CMOS prototype operating at 3.65 GHz achieves 12.5% power-added efficiency (PAE) at 6 dB back-off, while exceeding IEEE 802.11a -25 dB error vector magnitude (EVM) linearity requirement (using 1.55-V supply). A 28.9 dBm maximum Psat is achieved with 39% PAE (using 1.85-V supply). The active die area is 1.2 mm2.
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
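The heavy-edge heuristic is simple to sketch: visit vertices in random order and match each with its unmatched neighbor of maximum edge weight; collapsing the matched pairs into multinodes (not shown) then yields the coarser graph. The Python below is an illustrative coarsening step only, with an ad hoc graph encoding, not METIS code.

```python
# Heavy-edge matching sketch: one coarsening pass of a multilevel scheme.
import random

def heavy_edge_matching(adj):
    """adj: {u: {v: weight}}. Returns matched pairs (and leftover singletons)."""
    matched, groups = set(), []
    nodes = list(adj)
    random.shuffle(nodes)                 # random visiting order
    for u in nodes:
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)        # heaviest incident edge wins
            matched |= {u, v}
            groups.append((u, v))
        else:
            matched.add(u)
            groups.append((u,))
    return groups

random.seed(0)
adj = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 2}, 2: {0: 1, 3: 4}, 3: {1: 2, 2: 4}}
print(heavy_edge_matching(adj))           # e.g. [(0, 1), (2, 3)]
```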
Understanding churn in peer-to-peer networks The dynamics of peer participation, or churn, are an inherent property of Peer-to-Peer (P2P) systems and critical for design and evaluation. Accurately characterizing churn requires precise and unbiased information about the arrival and departure of peers, which is challenging to acquire. Prior studies show that peer participation is highly dynamic but with conflicting characteristics. Therefore, churn remains poorly understood, despite its significance. In this paper, we identify several common pitfalls that lead to measurement error. We carefully address these difficulties and present a detailed study using three widely-deployed P2P systems: an unstructured file-sharing system (Gnutella), a content-distribution system (BitTorrent), and a Distributed Hash Table (Kad). Our analysis reveals several properties of churn: (i) overall dynamics are surprisingly similar across different systems, (ii) session lengths are not exponential, (iii) a large portion of active peers are highly stable while the remaining peers turn over quickly, and (iv) peer session lengths across consecutive appearances are correlated. In summary, this paper advances our understanding of churn by improving accuracy, comparing different P2P file-sharing and distribution systems, and exploring new aspects of churn.
A g_m/I_D based methodology for the design of CMOS analog circuits and its application to the synthesis of a silicon-on-insulator micropower OTA A new design methodology based on a unified treatment of all the regions of operation of the MOS transistor is proposed. It is intended for the design of CMOS analog circuits and especially suited for low power circuits where the moderate inversion region often is used because it provides a good compromise between speed and power consumption. The synthesis procedure is based on the relation betwee...
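A worked numeric example of the g_m/I_D sizing flow the abstract outlines, with invented numbers: pick an inversion level through g_m/I_D (moderate inversion for the speed/power compromise mentioned above), derive the bias current from a target g_m, then read the device geometry off a normalized-current characteristic.

```python
# Illustrative g_m/I_D sizing arithmetic; all numeric values are invented
# stand-ins for points read off a measured or simulated g_m/I_D curve.
gm_target = 1e-3       # 1 mS transconductance, set by a gain/bandwidth spec
gm_over_id = 15.0      # 1/V: moderate inversion, a speed/power compromise
id_norm = 2e-6         # A per unit W/L: normalized current at this g_m/I_D

i_d = gm_target / gm_over_id       # required bias current: ~66.7 uA
w_over_l = i_d / id_norm           # required device geometry: ~33.3
print(f"I_D = {i_d * 1e6:.1f} uA, W/L = {w_over_l:.1f}")
```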
Bidirectional current-mode capacitor multiplier in DC-DC converter compensation Bidirectional current-mode capacitor multipliers for on-chip DC-DC converter compensation are presented in this paper. The increasing demand for portable devices is a driving force toward higher integration. Reducing physical area with the same or better performance is carried out. Based on TSMC 0.35μm technology, we demonstrate that a small capacitor is multiplied by a factor about 200. It allows the control system compensating circuit of DC-DC converter be easily integrated on a chip and occupy less silicon area. The experimental results show that the DC-DC converter with proposed architecture is stable for wide loading conditions from 10mA to 400mA while input voltage is 3.3V and output voltage is 2.0V. The quiescent current consumed is 9μA by single-ended capacitor multiplier and 19μA by two-ended capacitor multiplier.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the mT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs>fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.074133
0.073889
0.073889
0.073889
0.036944
0.006441
0.000348
0.000127
0
0
0
0
0
0
A 4-Phase 30–70 MHz Switching Frequency Buck Converter Using a Time-Based Compensator A high switching frequency multi-phase buck converter architecture using a time-based compensator is presented. Efficiency degradation due to mismatch between the phases is mitigated by generating precisely matched duty-cycles by combining a time-based multi-phase generator (MPG) with a time-based PID compensator (T-PID). The proposed approach obviates the need for a complex current sensing and ca...
System-on-Chip: Reuse and Integration Over the past ten years, as integrated circuits became increasingly more complex and expensive, the industry began to embrace new design and reuse methodologies that are collectively referred to as system-on-chip (SoC) design. In this paper, we focus on the reuse and integration issues encountered in this paradigm shift. The reusable components, called intellectual property (IP) blocks or cores, a...
A 300mA 14mV-ripple digitally controlled buck converter using frequency domain ΔΣ ADC and hybrid PWM generator A 0.18 μm CMOS digitally controlled DC-DC buck converter is presented. An all-digital 8 b frequency-domain ΔΣ ADC is used for the feedback path, and a 9 b segmented digital PWM is used for power stage control. A regulated output voltage accuracy of 1% and maximum efficiency of 94% is achieved with less than 14 mVpp ripple and a settling time of 100 μs for a 300 mA load transient.
Efficient Bi-Directional Digital Communication Scheme for Isolated Switch Mode Power Converters. An efficient high-speed bi-directional data transmission scheme for isolated AC-DC and DC-DC switched mode power converters is presented. The bi-directional scheme supports fast, efficient and reliable transmission of digitally encoded data across the isolation barrier and enables primary side control, allowing effective start-up and a simple interface to system controllers. Another key feature is...
Merged Two-Stage Power Converter With Soft Charging Switched-Capacitor Stage in 180 nm CMOS In this paper, we introduce a merged two-stage dc-dc power converter for low-voltage power delivery. By separating the transformation and regulation function of a dc-dc power converter into two stages, both large voltage transformation and high switching frequency can be achieved. We show how the switched-capacitor stage can operate under soft charging conditions by suitable control and integration (merging) of the two stages. This mode of operation enables improved efficiency and/or power density in the switched-capacitor stage. A 5-to-1 V, 0.8 W integrated dc-dc converter has been developed in 180 nm CMOS. The converter achieves a peak efficiency of 81%, with a regulation stage switching frequency of 10 MHz.
Digital 2-/3-Phase Switched-Capacitor Converter With Ripple Reduction and Efficiency Improvement. This paper presents a digitally controlled 2-/3-phase 6-ratio switched-capacitor (SC) dc-dc converter with low output voltage ripple and high efficiency. To achieve wide input and output voltage ranges, six voltage conversion ratios are generated with only two discrete flying capacitors by using both 2- and 3-phase operations. An adaptive ripple reduction scheme is proposed to achieve up to four tim...
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-Physical-Social Systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS must be conducted with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Networks of spiking neurons: the third generation of neural network models The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e., threshold gates) and on sigmoidal gates, respectively. In particular it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.
Distributed computation in dynamic networks In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T ≥ 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n²) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n²/T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.
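A small simulation sketch of the 1-interval connectivity model described above: each round an adversary re-chooses a connected graph, every node broadcasts the set of IDs it knows, and all IDs still spread within n-1 rounds, since in each round every incomplete set gains at least one element through some edge of the connected graph. The graph generator and parameters are illustrative, not from the paper.

```python
# Flooding in a 1-interval connected dynamic graph: topology is rechosen
# every round, yet every node learns all n IDs within n-1 rounds.
import random

def random_connected_graph(n):
    """Adversary's choice for one round: a random spanning tree (connected)."""
    order = list(range(n))
    random.shuffle(order)
    return {(order[i], random.choice(order[:i])) for i in range(1, n)}

def rounds_until_all_ids_spread(n):
    known = [{i} for i in range(n)]          # each node starts with its own ID
    for r in range(1, 2 * n):
        edges = random_connected_graph(n)    # topology rechosen each round
        nxt = [set(k) for k in known]
        for u, v in edges:                   # broadcast: exchange known sets
            nxt[u] |= known[v]
            nxt[v] |= known[u]
        known = nxt
        if all(len(k) == n for k in known):
            return r                         # every node can now count n
    return None

random.seed(2)
print(rounds_until_all_ids_spread(8))        # at most n-1 = 7 rounds
```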
Joint mismatch and channel compensation for high-speed OFDM receivers with time-interleaved ADCs Analog-to-digital converters (ADCs) with high sampling rates and output resolution are required for the design of mostly digital transceivers in emerging multi-Gigabit communication systems. A promising approach is to use a time-interleaved (TI) architecture with slower sub-ADCs in parallel, but mismatch among the sub-ADCs, if left uncompensated, can cause error floors in receiver performance. Conventional mismatch compensation schemes typically have complexity (in terms of number of multiplications) that increases with the desired resolution at the output of the TI-ADC. In this paper, we investigate an alternative approach, in which mismatch and channel dispersion are compensated jointly, with the performance metric being overall link reliability rather than ADC performance. For an OFDM system, we characterize the structure of mismatch-induced interference, and demonstrate the efficacy of a frequency-domain interference suppression scheme whose complexity is independent of constellation size (which determines the desired resolution). Numerical results from computer simulation and from experiments on a hardware prototype show that the performance with the proposed joint mismatch and channel compensation technique is close to that without mismatch. While the proposed technique works with offline estimates of mismatch parameters, we provide an iterative, online method for joint estimation of mismatch and channel parameters which leverages the training overhead already available in communication signals.
A DHT-Based Discovery Service for the Internet of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact-match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
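For context, exact-match lookup over a DHT reduces to consistent hashing; the toy ring below is a generic sketch (names like ToyDHT are invented for illustration), and, as the abstract notes, multiattribute and range queries would need an order-preserving index layered on top of such a ring.

import bisect, hashlib

def h(key):  # map strings onto a 160-bit ring
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ToyDHT:
    """Consistent-hashing ring supporting exact-match put/get only;
    range queries would need an extra indexing layer on top."""
    def __init__(self, node_names):
        self.ring = sorted((h(n), n) for n in node_names)
        self.store = {name: {} for name in node_names}

    def _owner(self, key):
        i = bisect.bisect(self.ring, (h(key),)) % len(self.ring)
        return self.ring[i][1]   # first node clockwise of the key

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)

dht = ToyDHT(["node-a", "node-b", "node-c"])
dht.put("object:42", {"type": "sensor"})
print(dht.get("object:42"))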
ΣΔ ADC with fractional sample rate conversion for software-defined radio receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1.11, 0.12, 0.12, 0.12, 0.1, 0.0125, 0, 0, 0, 0, 0, 0, 0, 0
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality. Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies. Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies. Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data). Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem. Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Architecture and design of adaptive object-models Many object-oriented information systems share an architectural style that emphasizes flexibility and run-time adaptability. Business rules are stored externally to the program such as in a database or XML files instead of in code. The object model that the user cares about is part of the database, and the object model of the code is just an interpreter of the users' object model. We call these systems "Adaptive Object-Models", because the users' object model is interpreted at runtime and can be changed with immediate (but controlled) effects on the system interpreting it. The real power in Adaptive Object-Models is that they have a definition of a domain model and rules for its integrity and can be configured by domain experts external to the execution of the program. This paper describes the Adaptive Object-Model architecture along with its strengths and weaknesses. It illustrates the Adaptive Object-Model architectural style by describing a framework for Medical Observations (following Fowler's Analysis Patterns) that we built.
Normal accidents: Data quality problems in ERP-enabled manufacturing The efficient operation of Enterprise Resource Planning (ERP) systems largely depends on data quality. ERP can improve data quality and information sharing within an organization. It can also pose challenges to data quality. While it is well known that data quality is important in ERP systems, most existing research has focused on identifying the factors affecting the implementation and the business values of ERP. With normal accident theory as a theoretical lens, we examine data quality problems in ERP using a case study of a large, fast-growing multinational manufacturer headquartered in China. Our findings show that organizations that have successfully implemented ERP can still experience certain data quality problems. We identify major data quality problems in data production, storage and maintenance, and utilization processes. We also analyze the causes of these data quality problems by linking them to certain characteristics of ERP systems within an organizational context. Our analysis shows that problems resulting from the tight coupling effects and the complexity of ERP-enabled manufacturing systems can be inevitable. This study will help researchers and practitioners formulate data management strategies that are effective in the presence of certain “normal” data quality problems.
A Framework for Considering Comprehensibility in Modeling. Comprehensibility in modeling is the ability of stakeholders to understand relevant aspects of the modeling process. In this article, we provide a framework to help guide exploration of the space of comprehensibility challenges. We consider facets organized around key questions: Who is comprehending? Why are they trying to comprehend? Where in the process are they trying to comprehend? How can we help them comprehend? How do we measure their comprehension? With each facet we consider the broad range of options. We discuss why taking a broad view of comprehensibility in modeling is useful in identifying challenges and opportunities for solutions.
Dependency-preserving normalization of relational and XML data Having a database design that avoids redundant information and update anomalies is the main goal of normalization techniques. Ideally, data as well as constraints should be preserved. However, this is not always achievable: while BCNF eliminates all redundancies, it may not preserve constraints, and 3NF, which achieves dependency preservation, may not always eliminate all redundancies. Our first goal is to investigate how much redundancy 3NF tolerates in order to achieve dependency preservation. We apply an information-theoretic measure and show that only prime attributes admit redundant information in 3NF, but their information content may be arbitrarily low. Then we study the possibility of achieving both redundancy elimination and dependency preservation by a hierarchical representation of relational data in XML. We provide a characterization of cases when an XML normal form called XNF guarantees both. Finally, we deal with dependency preservation in XML and show that like in the relational case, normalizing XML documents to achieve non-redundant data can result in losing constraints. By modifying the definition of XNF, we define another normal form for XML documents, X3NF, that generalizes 3NF for the case of XML and achieves dependency preservation.
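The normal-form tests behind this discussion bottom out in attribute closures under a set of functional dependencies; a minimal sketch (generic textbook algorithm, not code from the paper):

# Attribute closure under functional dependencies (FDs), the basic
# primitive behind BCNF/3NF tests.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# R(A,B,C) with A -> B and B -> C: A is a key, and B -> C violates BCNF.
fds = [("A", "B"), ("B", "C")]
print(closure("A", fds))   # {'A', 'B', 'C'}  -> A determines everything
print(closure("B", fds))   # {'B', 'C'}       -> B is not a superkey,
                           #    so B -> C is a BCNF violation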
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
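The difference-of-means idea at the heart of DPA can be reproduced on synthetic traces. The sketch below simulates Hamming-weight leakage of a 4-bit S-box (the PRESENT S-box is used purely as a convenient nonlinear example); it is a toy simulation, not an attack on real hardware.

import numpy as np

rng = np.random.default_rng(0)
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box
SECRET_KEY = 0x7

def hw(x):                      # Hamming weight
    return bin(x).count("1")

# Simulate traces: leakage = HW of the S-box output + Gaussian noise.
pts = rng.integers(0, 16, size=2000)
traces = np.array([hw(SBOX[p ^ SECRET_KEY]) for p in pts]) + rng.normal(0, 1, 2000)

# Difference-of-means DPA: for each key guess, split traces on one
# predicted S-box output bit and look for the largest mean gap.
def dpa(pts, traces, bit=0):
    scores = []
    for guess in range(16):
        sel = np.array([(SBOX[p ^ guess] >> bit) & 1 for p in pts], dtype=bool)
        scores.append(abs(traces[sel].mean() - traces[~sel].mean()))
    return int(np.argmax(scores))

print(hex(dpa(pts, traces)))    # expected to recover 0x7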
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM-SIAM Symposium on Discrete Algorithms," pp. 372-381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
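The deterministic baseline that this randomized algorithm improves on is the classic doubling strategy for w = 2, whose competitive ratio is 9: alternate directions, doubling the turn point each sweep. A minimal sketch:

# Deterministic doubling strategy for the 2-lane cow-path problem:
# walk 1 right, 2 left, 4 right, ... until the goal is crossed.
def cow_path_cost(goal):
    """Total distance walked before reaching `goal` (signed position,
    positive = right, goal != 0). Competitive ratio is at most 9."""
    cost, step, direction = 0.0, 1.0, 1
    while True:
        if (goal > 0) == (direction > 0) and abs(goal) <= step:
            return cost + abs(goal)      # goal lies on this sweep
        cost += 2 * step                 # walk out and back to the origin
        step *= 2
        direction = -direction

for g in (5, -13):
    print(g, cow_path_cost(g), cow_path_cost(g) / abs(g))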
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. In particular, our hypotheses and the designed adaptive controllers for network synchronization are rather simple in form, which is very useful for practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
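The equivalent-circuit view rests on the standard regeneration approximation (textbook form, not the paper's full derivation): during the latch phase the differential output grows exponentially from its initial overdrive, so the decision time grows only logarithmically as the overdrive shrinks:

\[ \Delta V(t) \approx \Delta V_0 \, e^{t/\tau_{\mathrm{regen}}}, \qquad \tau_{\mathrm{regen}} \approx \frac{C_{\mathrm{out}}}{g_m}, \qquad t_{\mathrm{decide}} \approx \tau_{\mathrm{regen}} \ln\frac{V_{\mathrm{logic}}}{\Delta V_0}. \]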
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
Drivers, barriers and critical success factors for ERPII implementation in supply chains: A critical analysis This paper reviews existing literature to determine the drivers of and barriers to Enterprise Resource Planning II (ERPII) implementation. The ERPII literature is then extended through interviews with potential players in ERPII implementations to identify the critical success factors (CSFs) or preconditions required for successful implementation throughout supply chains. These interviews were conducted with leading ERP vendors/consultants and organisations involved in the entire supply chain to gather evidence on the success, or lack thereof, of ERPII implementations. The results were compared and contrasted to existing literature on ERPII, collaborative networks, and the extended enterprise. We found more barriers to than drivers of successful ERPII implementation. This leads prospective implementers to have a pessimistic forecast for ERPII implementation success. Our research reveals that main reason for this negativity is a general lack of understanding and appreciation of the capabilities of the extended enterprise network. Second, the research presents two sets of CSFs: CSFs which apply to traditional ERP and carry forward to apply to ERPII, and CSFs that are tailored to the new needs for successful ERPII implementations. Finally, the research questions the suitability of ERPII in today's modern business environment, and suggests that technology may have overtaken management's capabilities to capture the full benefits of such an advanced enterprise system. Future trends in ERPII development are also considered in an attempt to find the next phase in the enterprise system life cycle. Beyond ERPII, the research suggests that infrastructure such as large-scale business intelligence (BI) systems must be heavily incorporated into modern enterprise systems to fully understand how information flows throughout an organisation and to make sense of that information.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Architecture and design of adaptive object-models Many object-oriented information systems share an architectural style that emphasizes flexibility and run-time adaptability. Business rules are stored externally to the program such as in a database or XML files instead of in code. The object model that the user cares about is part of the database, and the object model of the code is just an interpreter of the users' object model. We call these systems "Adaptive Object-Models", because the users' object model is interpreted at runtime and can be changed with immediate (but controlled) effects on the system interpreting it. The real power in Adaptive Object-Models is that they have a definition of a domain model and rules for its integrity and can be configured by domain experts external to the execution of the program. This paper describes the Adaptive Object-Model architecture along with its strengths and weaknesses. It illustrates the Adaptive Object-Model architectural style by describing a framework for Medical Observations (following Fowler's Analysis Patterns) that we built.
Concurrent Data Materialization for XML-Enabled Database with Semantic Metadata For a company with many databases in different data models, it is necessary to consolidate them into one interchangeable data model and present data in more than one data model concurrently to different users or individual users who need to access the data in more than one data model. The benefit is to let the user stick to his/her own data model to access database in another data model. This paper presents a semantic metadata to preserve database constraints for data materialization to support the user's view of database on an ad hoc basis. The semantic metadata can store the captured semantics of a relational or an XML-enabled database into classes. The stored constraints and data can be materialized into a target database upon user request. The user is allowed to perform data materialization many times alternatively. The process can provide a relational as well as an XML view to the users simultaneously. This concurrent data materialization function can be applied into data warehouse to consolidate heterogeneous database into a fact table in a data model of user's choice. Furthermore, a user can obtain either a relational view or an XML view of the same dataset of an XML-enabled database interchangeably.
A Framework for Considering Comprehensibility in Modeling. Comprehensibility in modeling is the ability of stakeholders to understand relevant aspects of the modeling process. In this article, we provide a framework to help guide exploration of the space of comprehensibility challenges. We consider facets organized around key questions: Who is comprehending? Why are they trying to comprehend? Where in the process are they trying to comprehend? How can we help them comprehend? How do we measure their comprehension? With each facet we consider the broad range of options. We discuss why taking a broad view of comprehensibility in modeling is useful in identifying challenges and opportunities for solutions.
Dependency-preserving normalization of relational and XML data Having a database design that avoids redundant information and update anomalies is the main goal of normalization techniques. Ideally, data as well as constraints should be preserved. However, this is not always achievable: while BCNF eliminates all redundancies, it may not preserve constraints, and 3NF, which achieves dependency preservation, may not always eliminate all redundancies. Our first goal is to investigate how much redundancy 3NF tolerates in order to achieve dependency preservation. We apply an information-theoretic measure and show that only prime attributes admit redundant information in 3NF, but their information content may be arbitrarily low. Then we study the possibility of achieving both redundancy elimination and dependency preservation by a hierarchical representation of relational data in XML. We provide a characterization of cases when an XML normal form called XNF guarantees both. Finally, we deal with dependency preservation in XML and show that like in the relational case, normalizing XML documents to achieve non-redundant data can result in losing constraints. By modifying the definition of XNF, we define another normal form for XML documents, X3NF, that generalizes 3NF for the case of XML and achieves dependency preservation.
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM-SIAM Symposium on Discrete Algorithms," pp. 372-381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. In particular, our hypotheses and the designed adaptive controllers for network synchronization are rather simple in form, which is very useful for practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
An Inverse-Class-F CMOS Oscillator With Intrinsic-High-Q First Harmonic and Second Harmonic Resonances. This paper details the theory and implementation of an inverse-class-F (class-F^-1) CMOS oscillator. It features: 1) a single-ended PMOS-NMOS-complementary architecture to generate the differential outputs and 2) a transformer-based two-port resonator to boost the drain-to-gate voltage gain (A_V) while creating two intrinsic-high-Q impedance peaks at the fundamental (f_LO) and double (2f_LO) oscilla...
Implicit Common-Mode Resonance in LC Oscillators. The performance of a differential LC oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, Q-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In th...
A Noise Circulating Oscillator This paper presents a noise circulating cross-coupled voltage-controlled oscillator (VCO) topology with a transformer-based tank. The introduced noise circulating active core greatly suppresses the effective noise power from the active devices while offering the same amount of negative resistance compared to conventional cross-coupled VCO topologies. The mechanism of noise circulation is investigated with theoretical analysis and further verified by simulation. Due to the broadband nature of the noise circulating technique, the resulting VCO phase noise in both the 1/f^2 and 1/f^3 regions is greatly improved over a wide frequency tuning range. A prototype VCO at 2.35 GHz is implemented in a standard 130-nm bulk CMOS process with 0.36-mm^2 core area. It draws 2.15 mA from a 1.2-V supply. The measured figure-of-merit (FoM) is 193.1/195.0/195.6 dBc/Hz at 100k/1M/10MHz offsets with a 1/f^3 phase noise corner of only 50 kHz. The VCO design consistently achieves >192.8 dBc/Hz FoM at 100k/1M/10MHz offsets and <60 kHz 1/f^3 phase noise corner over its entire 18.6% frequency tuning range (2.05-2.47 GHz). It also exhibits low supply frequency pushing of -25 and -13 MHz/V at the highest and lowest frequencies, respectively.
A General Theory of Injection Locking and Pulling in Electrical Oscillators—Part I: Time-Synchronous Modeling and Injection Waveform Design A general model of electrical oscillators under the influence of a periodic injection is presented. Stemming solely from the autonomy and periodic time variance inherent in all oscillators, the model’s underlying approach makes no assumptions about the topology of the oscillator or the shape of the injection waveform. A single first-order differential equation is shown to be capable of predicting a number of important properties, including the lock range, the relative phase of an injection-locked oscillator, and mode stability. The framework also reveals how the injection waveform can be designed to optimize the lock range. A diverse collection of simulations and measurements, performed on various types of oscillators, serve to verify the proposed theory.
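The classic special case of such a first-order phase model is Adler's equation, dφ/dt = Δω - ω_L·sin(φ), which locks whenever |Δω| ≤ ω_L. A forward-Euler integration sketch with illustrative values (a standard textbook model, not the paper's generalized one):

import numpy as np

# Adler's equation: dphi/dt = dw - wL*sin(phi). The oscillator locks to
# the injection when |dw| <= wL. Parameter values are illustrative.
def adler(dw, wL, phi0=0.1, dt=1e-4, steps=200_000):
    phi = phi0
    for _ in range(steps):
        phi += dt * (dw - wL * np.sin(phi))
    return phi

wL = 2 * np.pi * 1e3                      # 1 kHz lock range
print(adler(0.5 * wL, wL) % (2 * np.pi))  # locks near arcsin(0.5) ~ 0.524 rad
print(adler(2.0 * wL, wL))                # outside lock range: phase keeps slipping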
A 3.36-GHz Locking-Tuned Type-I Sampling PLL With −78.6-dBc Reference Spur Merging Single-Path Reference-Feedthrough-Suppression and Narrow-Pulse-Shielding Techniques This brief describes a type-I analog sampling phase-locked loop (S-PLL) featuring reference-feedthrough-suppression and narrow-pulse-shielding techniques in a single path to improve the reference (REF) spur. Specifically, we realize the former by inserting a T-shape switch with one center-tap ground, while the latter tackles the voltage ripple caused by the sampling non-idealities. Also, we can tu...
An Ultra-Low Phase Noise Class-F_2 CMOS Oscillator With 191 dBc/Hz FoM and Long-Term Reliability In this paper, we propose a new class of operation of an RF oscillator that minimizes its phase noise. The main idea is to enforce a clipped voltage waveform around the LC tank by increasing the second harmonic of the fundamental oscillation voltage through an additional impedance peak, thus giving rise to a class-F_2 operation. As a result, the noise contribution of the tail current transistor on the total phase noise can be significantly decreased without sacrificing the oscillator's voltage and current efficiencies. Furthermore, its special impulse sensitivity function (ISF) reduces the phase sensitivity to thermal circuit noise. The prototype of the class-F_2 oscillator is implemented in standard TSMC 65 nm CMOS occupying 0.2 mm^2. It draws 32-38 mA from a 1.3 V supply. Its tuning range is 19% covering 7.2-8.8 GHz. It exhibits phase noise of -139 dBc/Hz at 3 MHz offset from an 8.7 GHz carrier, translated to an average figure-of-merit of 191 dBc/Hz with less than 2 dB variation across the tuning range. The long-term reliability is also investigated, with an estimated >10 year lifetime.
1-5.6 Gb/s CMOS clock and data recovery IC with a static phase offset compensated linear phase detector This study presents a 1-5.6 Gb/s CMOS clock and data recovery (CDR) integrated circuit (IC) implemented in a 0.13 μm CMOS process. The CDR uses a half-rate linear phase detector (PD) whose static phase offset is compensated by an additional binary PD and a digital charge pump (CP) calibration block. During initialisation, the static phase offset is detected by the binary PD and the CP current is controlled accordingly to compensate the static phase offset. Also, the architecture of this CDR IC is designed for a clock-embedded serial data interface which transfers CDR training clock patterns before normal random data signals. The implemented IC consumes 16-22 mA from a 1.2 V core supply for data rates of 1-5.6 Gb/s and 20 mA from a 3.3 V I/O supply for two preamplifiers and low-voltage differential signalling drivers. When the 2^31-1 pseudorandom binary sequence is used, the measured bit-error rate is better than 10^-12 and the jitter tolerance is 0.3 UIpp. The recovered clock jitter is 21.6 and 4.2 ps rms for 1 and 5.6 Gb/s data rates, respectively.
Low-power area-efficient high-speed I/O circuit techniques We present a 4-Gb/s I/O circuit that fits in 0.1 mm^2 of die area, dissipates 90 mW of power, and operates over 1 m of 7-mil 0.5-oz PCB trace in a 0.25-μm CMOS technology. Swing reduction is used in an input-multiplexed transmitter to provide most of the speed advantage of an output-multiplexed architecture with significantly lower power and area. A delay-locked loop (DLL) using a supply-regulated inverter delay line gives very low jitter at a fraction of the power of a source-coupled delay line-based DLL. Receiver capacitive offset trimming decreases the minimum resolvable swing to 8 mV, greatly reducing the transmission energy without affecting the performance of the receive amplifier. These circuit techniques enable a high level of I/O integration to relieve the pin bandwidth bottleneck of modern VLSI chips.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
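The gating mechanism described above fits in a few lines of NumPy; this is a minimal forward pass of one LSTM cell (training omitted), with shapes and initialization chosen purely for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [x; h] to the four gate pre-activations;
    the cell state c plays the role of the 'constant error carousel'."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # gated write into the carousel
    h = o * np.tanh(c)         # gated read out of the carousel
    return h, c

n_in, n_hid = 3, 4
rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # run a length-5 input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h)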
Dynamic spectrum access in open spectrum wireless networks One of the reasons for the limitation of bandwidth in current generation wireless networks is the spectrum policy of the Federal Communications Commission (FCC). But, with the spectrum policy reform, open spectrum wireless networks and spectrum-agile radios are set to drive next-generation wireless networks. In this paper, we investigate continuous-time Markov models for dynamic spectrum access in open spectrum wireless networks. Both queueing and no-queueing cases are considered. Analytical results are derived based on the Markov models. A random access protocol is proposed that is shown to achieve airtime fairness. A distributed version of this protocol that uses only local information is also proposed based on the homo egualis anthropological model. Inequality aversion by the radio systems to achieve fairness is captured by this model. These protocols are then extended to spectrum-agile radios. Extensive simulation results are presented to compare the performances of fixed versus agile radios.
The rainbow skip graph: a fault-tolerant constant-degree distributed data structure We present a distributed data structure, which we call the rainbow skip graph. To our knowledge, this is the first peer-to-peer data structure that simultaneously achieves high fault-tolerance, constant-sized nodes, and fast update and query times for ordered data. It is a non-trivial adaptation of the SkipNet/skip-graph structures of Harvey et al. and Aspnes and Shah, so as to provide fault-tolerance as these structures do, but to do so using constant-sized nodes, as in the family tree structure of Zatloukal and Harvey. It supports successor queries on a set of n items using O(log n) messages with high probability, an improvement over the expected O(log n) messages of the family tree. Our structure achieves these results by using the following new constructs:• Rainbow connections: parallel sets of pointers between related components of nodes, so as to achieve good connectivity between "adjacent" components, using constant-sized nodes.• Hydra components: highly-connected, highly fault-tolerant components of constant-sized nodes, which will contain relatively large connected subcomponents even under the failure of a constant fraction of the nodes in the component.We further augment the hydra components in the rainbow skip graph by using erasure-resilient codes to ensure that any large subcomponent of nodes in a hydra component is sufficient to reconstruct all the data stored in that component. By carefully maintaining the size of related components and hydra components to be O(log n), we are able to achieve fast times for updates and queries in the rainbow skip graph. In addition, we show how to make the communication complexity for updates and queries be worst case, at the expense of more conceptual complexity and a slight degradation in the node congestion of the data structure.
A 52 μW Wake-Up Receiver With -72 dBm Sensitivity Using an Uncertain-IF Architecture A dedicated wake-up receiver may be used in wireless sensor nodes to control duty cycle and reduce network latency. However, its power dissipation must be extremely low to minimize the power consumption of the overall link. This paper describes the design of a 2 GHz receiver using a novel "uncertain-IF" architecture, which combines MEMS-based high-Q filtering and a free-running CMOS ring o...
Formal Analysis of Leader Election in MANETs Using Real-Time Maude.
A 232-1996-kS/s Robust Compressive Sensing Reconstruction Engine for Real-Time Physiological Signals Monitoring. Compressive sensing (CS) techniques enable new reduced-complexity designs for sensor nodes and help reduce overall transmission power in wireless sensor networks. However, for real-time physiological signal monitoring, the orthogonal matching pursuit used in prior CS reconstruction chip designs is sensitive to measurement noise and suffers from a low convergence rate. In this paper, we presen...
Scores: 1.11, 0.11, 0.11, 0.1, 0.06, 0.044, 0.01, 0.0025, 0, 0, 0, 0, 0, 0
Secure Model Checkers for Network-on-Chip (NoC) Architectures. As chip multiprocessors (CMPs) are becoming more susceptible to process variation, crosstalk, and hard and soft errors, emerging threats from rogue employees in a compromised foundry are creating new vulnerabilities that could undermine the integrity of our chips with malicious alterations. As the Network-on-Chip (NoC) is a focal point of sensitive data transfer and critical device coordination, there is an urgent demand for secure and reliable communication. In this paper we propose Secure Model Checkers (SMCs), a real-time solution for control logic verification and functional correctness in the micro-architecture to detect Hardware Trojan (HT) induced denial-of-service attacks and improve reliability. In our evaluation, we show that SMCs provide significant security enhancements in real time with only 1.5% power and 1.1% area overhead penalty in the micro-architecture.
Energy Efficient Run-Time Incremental Mapping for 3-D Networks-on-Chip 3-D Networks-on-Chip (NoC) emerge as a potent solution to address both the interconnection and design complexity problems facing future Multiprocessor System-on-Chips (MPSoCs). Effective run-time mapping on such 3-D NoC-based MPSoCs can be quite challenging, as the arrival order and task graphs of the target applications are typically not known a priori, which can be further complicated by stringent energy requirements for NoC systems. This paper thus presents an energy-aware run-time incremental mapping algorithm (ERIM) for 3-D NoC which can minimize the energy consumption due to the data communications among processor cores, while reducing the fragmentation effect on the incoming applications to be mapped, and simultaneously satisfying the thermal constraints imposed on each incoming application. Specifically, incoming applications are mapped to cuboid tile regions for lower energy consumption of communication and minimal routing. Fragment tiles due to system fragmentation can be gleaned for better resource utilization. Extensive experiments have been conducted to evaluate the performance of the proposed algorithm ERIM, and the results are compared against the optimal mapping algorithm (branch-and-bound) and two heuristic algorithms (TB and TL). The experiments show that ERIM outperforms the TB and TL methods with significant energy saving (more than 10%), much reduced average response time, and improved system utilization.
A Security Framework for NoC Using Authenticated Encryption and Session Keys Abstract Network on Chip (NoC) is an emerging solution to the existing scalability problems with System on Chip (SoC). However, it is exposed to security threats like extraction of secret information from IP cores. In this paper we present an Authenticated Encryption (AE)-based security framework for NoC based systems. The security framework resides in Network Interface (NI) of every IP core allowing secure communication among such IP cores. The secure cores can communicate using permanent keys whereas temporary session keys are used for communication between secure and non-secure cores. A traffic limiting counter is used to prevent bandwidth denial and access rights table avoids unauthorized memory accesses. We simulated and implemented our framework using Verilog/VHDL modules on top of NoCem emulator. The results showed tolerable area overhead and did not affect the network performance apart from some initial latency.
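The authenticated-encryption primitive such a framework places in each network interface can be illustrated with AES-GCM via the Python `cryptography` package; the framework's actual cipher choice, key distribution, and session-key handling are not reproduced here, and the header/payload names below are invented for illustration.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AE gives confidentiality and integrity in one primitive, the property
# the NI-resident framework relies on. Key/nonce handling is simplified.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse per key
header = b"src=ip3;dst=ip7"                 # authenticated but not encrypted
packet = aesgcm.encrypt(nonce, b"flit payload", header)

# Any tampering with packet or header raises InvalidTag on decrypt.
print(aesgcm.decrypt(nonce, packet, header))  # b'flit payload'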
A Survey of Timing Channels and Countermeasures. A timing channel is a communication channel that can transfer information to a receiver/decoder by modulating the timing behavior of an entity. Examples of this entity include the interpacket delays of a packet stream, the reordering packets in a packet stream, or the resource access time of a cryptographic module. Advances in the information and coding theory and the availability of high-performance computing systems interconnected by high-speed networks have spurred interest in and development of various types of timing channels. With the emergence of complex timing channels, novel detection and prevention techniques are also being developed to counter them. In this article, we provide a detailed survey of timing channels broadly categorized into network timing channel, in which communicating entities are connected by a network, and in-system timing channel, in which the communicating entities are within a computing system. This survey builds on the last comprehensive survey by Zander et al. [2007] and considers all three canonical applications of timing channels, namely, covert communication, timing side channel, and network flow watermarking. We survey the theoretical foundations, the implementation, and the various detection and prevention techniques that have been reported in literature. Based on the analysis of the current literature, we discuss potential future research directions both in the design and application of timing channels and their detection and prevention techniques.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was AT91M40400. The results clearly establish scratchpad memory as a low-power alternative in most situations, with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% of the cache memory.
On-Chip Interconnection Architecture of the Tile Processor iMesh, the Tile Processor Architecture's on-chip interconnection network, connects the multicore processor's tiles with five 2D mesh networks, each specialized for a different use. Taking advantage of the five networks, the C-based iLib interconnection library efficiently maps program communication across the on-chip interconnect. The Tile Processor's first implementation, the TILE64, contains 64 cores and can execute 192 billion 32-bit operations per second at 1 GHz.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
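Stripped of the parliamentary story, the acceptor's two rules are the core of single-decree Paxos; a compressed sketch (messaging, learners, and retry logic omitted; the decree name is invented):

class Acceptor:
    """Single-decree Paxos acceptor: never promise backwards, never
    accept below a promise."""
    def __init__(self):
        self.promised = -1          # highest ballot promised
        self.accepted = (-1, None)  # (ballot, value) last accepted

    def prepare(self, ballot):      # phase 1
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):  # phase 2
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

acceptors = [Acceptor() for _ in range(3)]
# Proposer with ballot 1: phase 1 on a majority, then phase 2. (A real
# proposer must adopt the highest-ballot value reported in the promises.)
promises = [a.prepare(1) for a in acceptors[:2]]
if all(ok for ok, _ in promises):
    assert all(a.accept(1, "decree-37") for a in acceptors[:2])
print(acceptors[0].accepted)   # (1, 'decree-37')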
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS standard cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better-informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly, to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality. Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies. Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies. Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data). Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem. Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied with the advent of cognitive radio. Cognitive radio is a new type of technology that enables secondary usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied spectrum opportunistically and coexist with wireless local area networks (WLANs). Through a primary-traffic prediction model and a transmission etiquette, OC-MAC avoids causing fatal damage to licensed users. An ns-2 simulation model is then developed to evaluate its performance in scenarios with a coexisting WLAN and cognitive network.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
Low-Cost Online Convolution Checksum Checker Managing random hardware faults requires the faults to be detected online, thus simplifying recovery. Algorithm-based fault tolerance has been proposed as a low-cost mechanism to check online the result of computations against random hardware failures. In this case, the checksum of the actual result is checked against a predicted checksum computed in parallel by a hardware checker. In this work, we target the design of such checkers for convolution engines that are currently the most critical building block in image processing and computer vision applications. The proposed convolution checksum checker, named ConvGuard, utilizes a newly introduced invariance condition of convolution to predict implicitly the output checksum using only the pixels at the border of the input image. In this way, ConvGuard reduces the power required for accumulating the input pixels without requiring large buffers to hold intermediate checksum results. The design of ConvGuard is generic and can be configured for different output sizes and strides. The experimental results show that ConvGuard utilizes only a small percentage of the area/power of an efficient convolution engine while being significantly smaller and more power efficient than a state-of-the-art checksum checker for various practical cases.
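The checksum invariance that such checkers exploit can be illustrated in a few lines of numpy. This is a minimal sketch of the general convolution checksum identity (for a full 1-D convolution, the output sum equals the input sum times the kernel sum), not ConvGuard's border-pixel-based implicit prediction; all names are illustrative.

    import numpy as np

    # Checksum invariant of (full) convolution: every input sample meets
    # every kernel tap exactly once, so sum(y) == sum(x) * sum(k).
    # An online checker compares the cheap right-hand-side prediction
    # against the accumulated output checksum to flag hardware faults.
    rng = np.random.default_rng(0)
    x = rng.random(64)                      # illustrative input signal
    k = rng.random(5)                       # illustrative kernel
    y = np.convolve(x, k, mode="full")
    predicted = x.sum() * k.sum()           # predicted output checksum
    assert np.isclose(y.sum(), predicted)   # a mismatch signals a fault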
High Throughput Spatial Convolution Filters on FPGAs. Digital signal processing (DSP) on field-programmable gate arrays (FPGAs) has long been appealing because of the inherent parallelism in these computations, which can easily be exploited to accelerate such algorithms. FPGAs have evolved significantly to further enhance the mapping of these algorithms, including additional hard blocks such as the DSP blocks found in modern FPGAs. Although these DSP b...
Ara: A 1-GHz+ Scalable and Energy-Efficient RISC-V Vector Processor With Multiprecision Floating-Point Support in 22-nm FD-SOI. In this article, we present Ara, a 64-bit vector processor based on the version 0.5 draft of RISC-V's vector extension, implemented in GlobalFoundries 22FDX fully depleted silicon-on-insulator (FD-SOI) technology. Ara's microarchitecture is scalable, as it is composed of a set of identical lanes, each containing part of the processor's vector register file and functional units. It achieves up to 9...
You Only Look Once: Unified, Real-Time Object Detection We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
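As a rough illustration of the regression formulation, the following sketch decodes a YOLO-style output tensor (grid size S, B boxes per cell, C classes, matching the paper's S=7, B=2, C=20 configuration); the exact tensor layout and the threshold are assumptions of this sketch, not the reference implementation.

    import numpy as np

    def decode_grid(pred, S=7, B=2, C=20, conf_thresh=0.2):
        # pred: (S, S, B*5 + C) array; each cell regresses B boxes
        # (x, y, w, h, confidence) plus C shared class probabilities.
        detections = []
        for i in range(S):
            for j in range(S):
                cell = pred[i, j]
                class_probs = cell[B * 5:]
                for b in range(B):
                    x, y, w, h, conf = cell[b * 5:b * 5 + 5]
                    if conf < conf_thresh:
                        continue
                    # (x, y) are offsets inside cell (i, j); (w, h)
                    # are relative to the whole image.
                    cx, cy = (j + x) / S, (i + y) / S
                    score = conf * class_probs.max()
                    detections.append((cx, cy, w, h, score,
                                       int(class_probs.argmax())))
        return detections  # typically followed by non-max suppression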
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods.Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
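The Miller-Madow correction mentioned above is simple enough to state in code. A minimal sketch (plug-in estimate plus the (m-1)/(2N) bias term, in nats), not the polynomial-approximation estimator the paper ultimately proposes:

    import numpy as np

    def plugin_entropy(counts):
        # Maximum-likelihood ("plug-in") entropy estimate, in nats.
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def miller_madow(counts):
        # First-order bias correction: add (m - 1) / (2N), where m is
        # the number of occupied bins and N the total sample size.
        m = np.count_nonzero(counts)
        N = counts.sum()
        return plugin_entropy(counts) + (m - 1) / (2 * N)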
Collection and Analysis of Microprocessor Design Errors Research on practical design verification techniques has long been impeded by the lack of published, detailed error data. We have systematically collected design error data over the last few years from a number of academic microprocessor design projects. We analyzed this data and report on the lessons learned in the collection effort.
The challenges of merging two similar structured overlays: a tale of two networks Structured overlay networks are an important and interesting primitive that can be used by diverse peer-to-peer applications. Multiple overlays can result either because of network partitioning or (more likely) because different groups of peers build such overlays separately before coming into contact with each other and wishing to coalesce the overlays. This paper is a first look into how multiple such overlays (all using the same protocols) can be merged – which is critical for the usability and adoption of such an internet-scale distributed system. We elaborate how two networks using the same protocols can be merged, looking specifically into two different overlay design principles: (i) maintaining the ring invariant and (ii) structural replication, either of which is used in various overlay networks to guarantee functional correctness in a highly dynamic (membership changes) environment. In particular, we show that ring-based networks cannot operate until the merger operation completes. In contrast, from the perspective of individual peers in structurally replicated overlays there is no disruption of service, and they can continue to discover and access the resources they originally could before the beginning of the merger process, even though resources from the other network become visible only gradually as the merger progresses.
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption compared to other fractional SRC methods. Using a multiple-set insertion/deletion technique is effective in reducing the distortion caused by the insertion/deletion step. When the conversion factor is (N ± k)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as k increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
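A minimal sketch of the direct-deletion variant for a conversion factor of N/(N+1): one sample is deleted from every block of N+1 input samples, and the deletion position is exactly the degree of freedom an optimum point-selection algorithm would tune. Function and parameter names are illustrative.

    import numpy as np

    def direct_delete_src(x, N, drop_index):
        # Fractional SRC by direct deletion: in every block of N+1
        # input samples, drop the sample at position drop_index,
        # yielding an output rate of N/(N+1) times the input rate.
        # The resulting distortion depends strongly on where the
        # deletion lands, hence the point-selection problem.
        n_blocks = len(x) // (N + 1)
        out = [np.delete(x[b * (N + 1):(b + 1) * (N + 1)], drop_index)
               for b in range(n_blocks)]
        return np.concatenate(out)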
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
0
0
A distributed multiple dimensional QoS constrained resource scheduling optimization policy in computational grid This paper addresses efficient QoS-based resource scheduling in computational grids. It defines a set of QoS dimensions with a utility function for each dimension and uses a market model for distributed optimization to maximize the global utility. A user specifies its requirement by a utility function, and a utility function can be specified for each QoS dimension. In the grid, grid task agents act as consumers that pay for grid resources, while resource providers earn profits from task agents. A task agent's utility can then be defined as a weighted sum of single-dimensional QoS utility functions. QoS-based grid resource scheduling optimization is decomposed into two subproblems: the joint optimization of resource users and resource providers in the grid market. An iterative multiple-QoS scheduling algorithm is used to perform optimal multiple-QoS-based resource scheduling. The grid users propose payments for the resource providers, while the resource providers set a price for each resource. The experiments show that optimal QoS-based resource scheduling involves less overhead and leads to more efficient resource allocation than non-optimal resource allocation.
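The weighted-sum utility at the heart of this formulation is easy to sketch; the dimension names, weights, and utility shapes below are illustrative assumptions, not the paper's calibration.

    def task_agent_utility(qos, weights, utilities):
        # Aggregate utility of a task agent: a weighted sum of
        # single-dimensional QoS utility functions, one per dimension.
        return sum(weights[d] * utilities[d](qos[d]) for d in qos)

    # Example: an agent valuing low latency twice as much as low cost.
    u = task_agent_utility(
        qos={"latency_ms": 20.0, "cost": 5.0},
        weights={"latency_ms": 2 / 3, "cost": 1 / 3},
        utilities={"latency_ms": lambda v: 1.0 / (1.0 + v),
                   "cost": lambda v: -v},
    )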
Multiple agent-based autonomy for satellite constellations There is an increasing desire to use constellations of autonomous spacecraft working together to accomplish complex mission objectives. Multiple, highly autonomous, satellite systems are envisioned because they are capable of higher performance, lower cost, better fault tolerance, reconfigurability and upgradability. This paper presents an architecture and multi-agent design and simulation environment that will enable agent-based multi-satellite systems to fulfill their complex mission objectives, termed TeamAgent™. Its application is shown for TechSat21, a U.S. Air Force mission designed to explore the benefits of distributed satellite systems. Required spacecraft functions, software agents, and multi-agent organisations are described for the TechSat21 mission, as well as their implementation. Agent-based simulations of TechSat21 case studies show the autonomous operation and how TeamAgent can be used to evaluate and compare multi agent-based organisations.
On a Class of Directed Graphs - With an Application to Traffic-Flow Problems Let G be a directed graph such that every edge e of G is associated with a positive integer, called the index of e. Then G is called a network graph if, at every vertex v of G, the sum of the indices of the edges terminating at v is equal to that of the edges incident from v. This paper obtains several theorems giving necessary and sufficient conditions for a directed graph to be a network graph, and applies the results to solving some problems of smoothing traffic flow.
An Inexact Dual Fast Gradient-Projection Method for Separable Convex Optimization with Linear Coupled Constraints. In this paper, a class of separable convex optimization problems with linear coupled constraints is studied. According to the Lagrangian duality, the linear coupled constraints are appended to the objective function. Then, a fast gradient-projection method is introduced to update the Lagrangian multiplier, and an inexact solution method is proposed to solve the inner problems. The advantage of our proposed method is that the inner problems can be solved in an inexact and parallel manner. The established convergence results show that our proposed algorithm still achieves optimal convergence rate even though the inner problems are solved inexactly. Finally, several numerical experiments are presented to illustrate the efficiency and effectiveness of our proposed algorithm.
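A minimal sketch of the dualization idea (plain dual gradient ascent rather than the paper's fast gradient-projection method, and with exact rather than inexact inner solves): once the coupling constraint is priced into the objective, the inner problems decouple and can run in parallel. All callables here are assumed to be supplied by the user.

    import numpy as np

    def dual_decomposition(argmin_x1, argmin_x2, A1, A2, b,
                           step=0.1, iters=200):
        # Separable problem: min f1(x1) + f2(x2) s.t. A1 x1 + A2 x2 = b.
        # argmin_xi(lam) returns argmin_x fi(x) + lam @ (Ai x); for a
        # fixed multiplier the two inner problems are independent.
        lam = np.zeros(b.shape)
        for _ in range(iters):
            x1 = argmin_x1(lam)      # solvable in parallel ...
            x2 = argmin_x2(lam)      # ... with the other subproblem
            lam = lam + step * (A1 @ x1 + A2 @ x2 - b)  # ascent step
        return x1, x2, lam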
Distributed Tracking of Nonlinear Multiagent Systems Under Directed Switching Topology: An Observer-Based Protocol This paper deals with a consensus tracking problem for multiagent systems (MASs) with Lipschitz-type nonlinear dynamics and directed switching topology. Unlike most existing works where the relative full state measurements of neighboring agents are utilized, it is assumed that only the relative output measurements of neighboring agents are available for coordination. To achieve consensus tracking in the considered MASs, a new class of observer-based protocols is proposed. By appropriately constructing some topology-dependent multiple Lyapunov functions, it is theoretically shown that distributed consensus tracking in the closed-loop MASs equipped with the designed protocols can be ensured if each possible topology contains a directed spanning tree rooted at the leader and the dwell time for the switchings among different topology is less than a derived positive quantity. Interestingly, it is found that the communication topology for observers’ states may be independent with that of the feedback signals. The derived results are further extended to the case of directed switching topology with only average dwell time constraints. Finally, the effectiveness of the analytical results is demonstrated via numerical simulations.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
GloMoSim: a library for parallel simulation of large-scale wireless networks Abstract A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
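For reference, one step of an LSTM cell in numpy, in the now-standard formulation with a forget gate (a slightly later addition to the 1997 architecture); the parameter stacking order is an assumption of this sketch.

    import numpy as np

    def lstm_step(x, h, c, W, U, b):
        # One LSTM time step. W (4n x d), U (4n x n) and b (4n,) stack
        # the parameters of the input/forget/output gates and the
        # candidate cell update, in that order.
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        n = h.shape[0]
        z = W @ x + U @ h + b
        i = sigmoid(z[0 * n:1 * n])      # input gate
        f = sigmoid(z[1 * n:2 * n])      # forget gate
        o = sigmoid(z[2 * n:3 * n])      # output gate
        g = np.tanh(z[3 * n:4 * n])      # candidate cell state
        c_new = f * c + i * g            # constant error carousel
        h_new = o * np.tanh(c_new)
        return h_new, c_new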
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns This article describes three powerful general-purpose families of exploits for buffer overruns: arc injection, pointer subterfuge, and heap smashing. These new techniques go beyond the traditional "stack smashing" attack and invalidate traditional assumptions about buffer overruns.
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI- (linear matrix inequality) based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers. LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
0
Distributed Optimal Economic Dispatch for Microgrids Considering Communication Delays This paper investigates the economic dispatch problem of microgrids in a distributed fashion. To address this issue, a delay-free-based distributed algorithm is presented to optimally assign the whole energy demand among local generation units with the objective of minimizing the aggregated operation cost. By implementing the proposed algorithm, each component can find its own optimal operations by only requiring local computation and communication. As a result, it enhances the system robustness, flexibility, privacy, etc. More importantly, a time-varying delays model is considered and embedded into the design of our distributed algorithm, such that the components can employ the delay information to achieve collaborative operation, which is more general and applicable for practical power systems. In addition, we have proved that the proposed algorithm can converge to the global optimal point under some sufficient conditions. Finally, several simulations are provided to demonstrate the correctness and effectiveness of the proposed algorithm.
Convergence rate analysis of distributed optimization with projected subgradient algorithm. In this paper, we revisit the consensus-based projected subgradient algorithm proposed for a common set constraint. We show that the commonly adopted non-summable and square-summable diminishing step sizes of subgradients can be relaxed to be only non-summable, if the constrained optimum set is bounded. More importantly, for a strongly convex aggregate cost with different types of step sizes, we provide a systematical analysis to derive the asymptotic upper bound of convergence rates in terms of the optimum residual, and select the best step sizes accordingly. Our result shows that a convergence rate of O(1∕k) can be achieved with a step size O(1∕k).
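A minimal sketch of the consensus-based projected subgradient iteration with the non-summable step size O(1/k) discussed above; the mixing matrix, subgradient oracle, and projection operator are assumed to be supplied by the user.

    import numpy as np

    def consensus_projected_subgradient(x, W, subgrad, project, iters=500):
        # x: (n_agents, dim) local estimates; W: doubly stochastic
        # mixing matrix matching the communication graph.
        for k in range(1, iters + 1):
            alpha = 1.0 / k                  # non-summable step size
            x = W @ x                        # consensus (averaging) step
            for i in range(x.shape[0]):
                # local subgradient step, then projection onto the
                # common constraint set
                x[i] = project(x[i] - alpha * subgrad(i, x[i]))
        return x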
A Distributed and Scalable Processing Method Based Upon ADMM. The alternating direction multiplier method (ADMM) was originally devised as an iterative method for solving convex minimization problems by means of parallelization, and was recently used for distributed processing. This letter proposes a modification of state-of-the-art ADMM formulations in order to obtain a scalable version, well suited for a wide range of applications such as cooperative local...
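For context, the global-variable consensus form of ADMM in scaled notation, which is the usual starting point for such distributed and scalable variants; the proximal oracles are assumptions of this sketch, not the letter's specific modification.

    import numpy as np

    def consensus_admm(prox_fs, dim, rho=1.0, iters=100):
        # Minimize sum_i f_i(z) by giving each agent a local copy x_i
        # constrained to equal a global z. prox_fs[i](v, rho) returns
        # argmin_x f_i(x) + (rho / 2) * ||x - v||^2.
        n = len(prox_fs)
        x = np.zeros((n, dim))
        u = np.zeros((n, dim))             # scaled dual variables
        z = np.zeros(dim)
        for _ in range(iters):
            for i in range(n):             # x-updates run in parallel
                x[i] = prox_fs[i](z - u[i], rho)
            z = (x + u).mean(axis=0)       # gathering/averaging step
            u = u + x - z                  # dual (residual) update
        return z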
Distributed Optimization With Local Domains: Applications in MPC and Network Flows We consider a network where each node has exclusive access to a local cost function. Our contribution is a communication-efficient distributed algorithm that finds a vector x⋆ minimizing the sum of all the functions. We make the additional assumption that the functions have intersecting local domains, i.e., each function depends only on some components of the variable. Consequently, each node is interested in knowing only some components of x⋆, not the entire vector. This allows improving communication efficiency. We apply our algorithm to distributed model predictive control (D-MPC) and to network flow problems and show, through experiments on large networks, that the proposed algorithm requires less communications to converge than prior state-of-the-art algorithms.
Stochastic Proximal Gradient Consensus Over Random Networks. We consider solving a convex optimization problem with possibly stochastic gradient, and over a randomly time-varying multiagent network. Each agent has access to some local objective function, and it only has unbiased estimates of the gradients of the smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm, with the following key features: 1) it works for both the static and certain randomly time-varying networks; 2) it allows the agents to utilize either the exact or stochastic gradient information; 3) it is convergent with provable rate. In particular, the proposed algorithm converges to a global optimal solution, with a rate of O(1/r) [resp. O(1/√r)] when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of (seemingly unrelated) distributed algorithms, such as the EXTRA, the PG-EXTRA, the IC/IDC-ADMM, the DLM, and the classical distributed subgradient method.
Distributed Nonsmooth Optimization With Coupled Inequality Constraints via Modified Lagrangian Function. This note considers a distributed convex optimization problem with nonsmooth cost functions and coupled nonlinear inequality constraints. To solve the problem, we first propose a modified Lagrangian function containing local multipliers and a nonsmooth penalty function. Then, we construct a distributed continuous-time algorithm by virtue of a projected primal-dual subgradient dynamics. Based on th...
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications This paper introduces MorphoSys, a reconfigurable computing system developed to investigate the effectiveness of combining reconfigurable hardware with general-purpose processors for word-level, computation-intensive applications. MorphoSys is a coarse-grain, integrated, and reconfigurable system-on-chip, targeted at high-throughput and data-parallel applications. It is comprised of a reconfigurable array of processing cells, a modified RISC processor core, and an efficient memory interface unit. This paper describes the MorphoSys architecture, including the reconfigurable processor array, the control processor, and data and configuration memories. The suitability of MorphoSys for the target application domain is then illustrated with examples such as video compression, data encryption and target recognition. Performance evaluation of these applications indicates improvements of up to an order of magnitude (or more) on MorphoSys, in comparison with other systems.
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables, for which no efficient technique to identify the beginning of AES rounds was previously known - the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
The analysis and improvement of a current-steering DAC's dynamic SFDR-I: the cell-dependent delay differences For a high-accuracy current-steering digital-to-analog converter (DAC), the delay differences between the current sources are one of the major causes of poor dynamic performance. In this paper, a mathematical model describing the impact of the delay differences on the DAC's SFDR is presented. The results are verified by comparison to behavioral-level simulations and to actual measurement data from published papers. Based on this analysis, a delay differences cancellation (DDC) technique to reduce the impact of the delay differences on the SFDR is proposed and verified by simulation results.
An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM In this paper, we propose the concept of compute memory, where computation is deeply embedded into the memory (SRAM). This deep embedding enables multi-row read access and analog signal processing. Compute memory exploits the relaxed precision and linearity requirements of pattern recognition applications. System-level simulations incorporating various deterministic errors from the analog signal chain demonstrate that the limited accuracy of analog processing does not significantly degrade system performance, meaning the probability of pattern detection is minimally impacted. The estimated energy saving is 63% compared to a conventional system with standard embedded memory and a parallel processing architecture, for a 256×256 target image.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.1
0.1
0.1
0.1
0.1
0.05
0.002381
0
0
0
0
0
0
0
Analog-to-digital converter survey and analysis Analog-to-digital converters (ADCs) are ubiquitous, critical components of software radio and other signal processing systems. This paper surveys the state-of-the-art of ADCs, including experimental converters and commercially available parts. The distribution of resolution versus sampling rate provides insight into ADC performance limitations. At sampling rates below 2 million samples per second (Ms/s), resolution appears to be limited by thermal noise. At sampling rates ranging from ~2 Ms/s to ~4 giga samples per second (Gs/s), resolution falls off by ~1 bit for every doubling of the sampling rate. This behavior may be attributed to uncertainty in the sampling instant due to aperture jitter. For ADCs operating at multi-Gs/s rates, the speed of the device technology is also a limiting factor due to comparator ambiguity. Many ADC architectures and integrated circuit technologies have been proposed and implemented to push back these limits. The trend toward single-chip ADCs brings lower power dissipation. However, technological progress as measured by the product of the ADC resolution (bits) times the sampling rate is slow. Average improvement is only ~1.5 bits for any given sampling frequency over the last six to eight years.
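The ~1 bit per doubling slope attributed to aperture jitter follows directly from the standard jitter-limited SNR formula; a small sketch (textbook formula, illustrative numbers):

    import numpy as np

    def jitter_limited_enob(f_in, t_jitter):
        # Aperture-jitter-limited SNR: SNR = -20*log10(2*pi*f_in*tj).
        # Doubling f_in costs 6.02 dB, i.e. one effective bit, which
        # matches the survey's observed resolution fall-off.
        snr_db = -20.0 * np.log10(2.0 * np.pi * f_in * t_jitter)
        return (snr_db - 1.76) / 6.02      # effective number of bits

    print(jitter_limited_enob(100e6, 1e-12))   # ~10.3 bits @ 100 MHz, 1 ps jitter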
Design Considerations for a Direct RF Sampling Mixer This brief presents a detailed time-domain and frequency-domain analysis of a direct RF sampling mixer. Design considerations such as incomplete charge sharing and large signal nonlinearity are addressed. An accurate frequency-domain transfer function is derived. Estimation of noise figure is given. The analysis applies to the design of sub-sampling mixers that have become important for software-d...
Advanced base station technology The authors present an overview of advances in three areas including software radio, adaptive antenna technology, and high-temperature superconductivity as currently envisioned for use in advanced cellular base stations. The conclusion broadly drawn is that the advancement in DSP and ASIC technologies will provide the major contribution in making these technologies technically and commercially realistic
LC-Based Bandpass Continuous-Time Sigma-Delta Modulators With Widely Tunable Notch Frequency This paper analyses the use of bandpass continuous-time ΣΔ modulators with widely programmable notch frequency for the efficient digitization of radio-frequency signals in the next generation of software-defined-radio mobile systems. The modulator architectures under study are based on a fourth-order loop filter - implemented with two LC-based resonators - and a finite-impulsive-response feedback loop in order to increase their flexibility and degrees of freedom. Several topologies are studied, considering three different cases for the embedded digital-to-analog converter, namely: return-to-zero, non-return-to-zero and raised-cosine waveform. In all cases, a notch-aware synthesis methodology is presented, which takes into account the dependency of the loop-filter coefficients on the notch frequency and compensates for the dynamic range degradation due to the variation of the notch. The synthesized modulators are compared in terms of their sensitivity to main circuit error mechanisms and the estimated power consumption over a notch-frequency tuning range of 0.1fs to 0.4fs. Time-domain behavioral and macromodel electrical simulations validate this approach, demonstrating the feasibility of the presented methodology and architectures for the efficient and robust digitization of radio-frequency signals with a scalable resolution and programmable signal bandwidth.
All-digital TX frequency synthesizer and discrete-time receiver for Bluetooth radio in 130-nm CMOS We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the ph...
Noise Analysis of Regenerative Comparators for Reconfigurable ADC Architectures The need for highly integrable and programmable analog-to-digital converters (ADCs) is pushing towards the use of dynamic regenerative comparators to maximize speed, power efficiency and reconfigurability. Comparator thermal noise is, however, a limiting factor for the achievable resolution of several ADC architectures with scaled supply voltages. While mismatch in these comparators can be compensated for by calibration, noise can irreparably hinder performance and is less straightforward to be accounted for at design time. This paper presents a method to estimate the input referred noise in fully dynamic regenerative comparators leveraging a reference architecture. A time-domain analysis is proposed that accounts for the time varying nature of the circuit exploiting some basic results from the solution of stochastic differential equations. The resulting symbolic expressions allow focusing designers' attention on the most influential noise contributors. Analysis results are validated by comparison with electrical simulations and measurement results from two ADC prototypes based on the reference comparator architecture, implemented in 0.18-μm and 90-nm CMOS technologies.
A Continuous-Time ADC/DSP/DAC System With No Clock and With Activity-Dependent Power Dissipation A continuous-time system that converts its analog input to a continuous-time digital representation without sampling, then processes the information digitally without the aid of a clock, is presented. Without sampling there is no aliasing, which reduces the in-band distortion power by not aliasing into band out-of-band distortion components. The 8-bit system, fabricated in a 90 nm CMOS process, ut...
A Multimodal CMOS MEA for High-Throughput Intracellular Action Potential Measurements and Impedance Spectroscopy in Drug-Screening Applications. Multi-electrode arrays (MEAs) are a candidate technology to screen cardiotoxicity in vitro because they enable noninvasive recording of cardiac beating rate, electrical field potential duration, and other parameters. In this paper, we present an active MEA chip featuring 16 384 electrodes, 1024 simultaneous readout channels, and 64 stimulation units (SUs) to enable six different cell-interfacing m...
Efficient filterbank channelizers for software radio receivers For cellular software radio receivers, this paper presents a computationally efficient algorithm for extracting individual radio channels from the output of the wideband A/D converter. In a software radio, the extraction of individual channels from the output of the wide band A/D converter is by far the most computationally demanding task; hence, it is very important to devise computationally efficient algorithms for this task. Our algorithm is obtained by modifying the DFT filter bank structure that is well known in the multi-rate signal processing literature (Scheuermann and Gockler, 1981; Vaidyanathan, 1990). We show that the complexity of the proposed algorithm is significantly less (2×-50×) than the complexity of the conventional channelizers
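A compact sketch of the underlying uniform DFT filter bank (critically sampled analysis side): each polyphase branch filters a decimated stream, and a single transform separates all M channels, which is where the large complexity savings over per-channel mixing and filtering come from. Phase and commutator conventions are simplified here and would need care in a real design.

    import numpy as np

    def dft_channelizer(x, h, M):
        # h: prototype lowpass filter whose polyphase components
        # h[m::M] feed M branches; an M-point (I)FFT across branches
        # then extracts all M channels at the decimated rate.
        n_blocks = len(x) // M
        branches = np.zeros((M, n_blocks))
        for m in range(M):
            xm = x[m::M][:n_blocks]               # decimated branch input
            pm = h[m::M]                          # polyphase component
            branches[m] = np.convolve(xm, pm)[:n_blocks]
        return np.fft.ifft(branches, axis=0)      # one row per channel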
A wideband 2.4-GHz delta-sigma fractional-N PLL with 1-Mb/s in-loop modulation A phase noise cancellation technique and a charge pump linearization technique, both of which are insensitive to component errors, are presented and demonstrated as enabling components in a wideband CMOS delta-sigma fractional-N phase-locked loop (PLL). The PLL has a loop bandwidth of 460 kHz and is capable of 1-Mb/s in-loop FSK modulation at center frequencies of 2402 + k MHz for k = 0, 1, 2, ...
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
New OPBHWICAP Interface for Realtime Partial Reconfiguration of FPGA We propose in this paper a timing analysis of dynamic partial reconfiguration (PR) applied to a NoC (Network on Chip) structure inside an FPGA. In the context of an SDR (Software Defined Radio) example, PR is used to dynamically reconfigure a baseband processing block of a 4G telecommunication chain running in real time (data rates up to 100 Mbps). The results presented show the validity of our methodology for PR management regarding the timing performance obtained in a real implementation. PR timing is a key point in making the SDR approach realistic. These results show that, using PR, FPGAs combine the flexibility of SW (software) and the processing power of HW (hardware), which makes PR a tremendous enabling technology for SDR. These results are based on a new IP managing the ICAP component that achieves a 124× speedup compared to the provided OPBHWICAP. Moreover, we have integrated a methodology that can significantly reduce the bitstream size and consequently the reconfiguration duration. The results presented in this paper show that PR reconfiguration time can go down to a few tens of microseconds, which makes PR really attractive for SDR design or any other highly demanding real-time application.
Electromagnetic regenerative suspension system for ground vehicles This paper considers an electromagnetic regenerative suspension system (ERSS) that recovers the kinetic energy originated from vehicle vibration, which is previously dissipated in traditional shock absorbers. It can also be used as a controllable damper that can improve the vehicle's ride and handling performance. The proposed electromagnetic regenerative shock absorbers (ERSAs) utilize a linear or a rotational electromagnetic generator to convert the kinetic energy from suspension vibration into electricity, which can be used to reduce the load on the alternator so as to improve fuel efficiency. A complete ERSS is discussed here that includes the regenerative shock absorber, the power electronics for power regulation and suspension control, and an electronic control unit (ECU). Different shock absorber designs are proposed and compared for simplicity, efficiency, energy density, and controlled suspension performances. Both simulation and experiment results are presented and discussed.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.011377
0.009815
0.008963
0.008696
0.002652
0.000678
0.00013
0.000058
0.000015
0
0
0
0
0
Isometric Torque Control for Neuromuscular Electrical Stimulation With Time-Varying Input Delay. Previous results have shown experimental evidence that the muscle response to neuromuscular electrical stimulation (NMES) is delayed; the time lag is often referred to as electromechanical delay. NMES closed-loop control methods have been developed to compensate for a known constant input delay. However, as a muscle fatigues, this delay increases. This paper develops a feedback controller that robustly compensates for the time-varying delay of an uncertain muscle model during isometric contractions. The controller is proven to yield global uniformly ultimately bounded torque tracking error. Experimental results illustrate the effectiveness of the developed controller and the time-varying nature of the delayed response.
Robustness of Adaptive Control under Time Delays for Three-Dimensional Curve Tracking. We analyze the robustness of a class of controllers that enable three-dimensional curve tracking by a free moving particle. The free particle tracks the closest point on the curve. By building a strict Lyapunov function and robustly forward invariant sets, we show input-to-state stability under predictable tolerance and safety bounds that guarantee robustness under control uncertainty, input delays, and a class of polygonal state constraints, including adaptive tracking and parameter identification under unknown control gains. Such an understanding may provide certified performance when the control laws are applied to real-life systems.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay that is not necessarily causal, meaning that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay's time-derivative is sufficiently small. This result is obtained by generalizing Halanay's inequality to time-varying differential inequalities.
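To make the prediction idea concrete, here is the standard discrete-time predictor feedback for a constant d-step input delay (the constant-delay baseline; the paper's contribution is handling a time-varying, possibly non-causal delay via a transport PDE). Matrix names and the input-buffer layout are assumptions of this sketch.

    import numpy as np

    def predictor_feedback(A, B, K, x, u_hist, d):
        # Plant: x[k+1] = A x[k] + B u[k-d]. Predict the state d steps
        # ahead through the buffered, not-yet-applied inputs, then apply
        # the nominal gain K to the prediction rather than to x[k].
        # u_hist: the last d input vectors, oldest first.
        x_pred = np.linalg.matrix_power(A, d) @ x
        for i, u in enumerate(u_hist):
            x_pred = x_pred + np.linalg.matrix_power(A, d - 1 - i) @ B @ u
        return K @ x_pred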
Asymptotic stability for time-variant systems and observability: Uniform and nonuniform criteria This paper presents some new criteria for uniform and nonuniform asymptotic stability of equilibria for time-variant differential equations and this within a Lyapunov approach. The stability criteria are formulated in terms of certain observability conditions with the output derived from the Lyapunov function. For some classes of systems, this system theoretic interpretation proves to be fruitful since-after establishing the invariance of observability under output injection-this enables us to check the stability criteria on a simpler system. This procedure is illustrated for some classical examples.
Local Stabilization of Nonlinear Systems Through the Reduction Model Approach. We study a general class of nonlinear systems with input delays of arbitrary size. We adapt the reduction model approach to prove local asymptotic stability of the closed loop input delayed systems, using feedbacks that may be nonlinear. Our Lyapunov-Krasovskii functionals make it possible to determine estimates of the basins of attraction for the closed loop systems.
Stabilization of nonlinear delay systems using approximate predictors and high-gain observers We provide a solution to the heretofore open problem of stabilization of systems with arbitrarily long delays at the input and output of a nonlinear system using output feedback only. The solution is global, employs the predictor approach over the period that combines the input and output delays, addresses nonlinear systems with sampled measurements and with control applied using a zero-order hold, and requires that the sampling/holding periods be sufficiently short, though not necessarily constant. Our approach considers a class of globally Lipschitz strict-feedback systems with disturbances and employs an appropriately constructed successive approximation of the predictor map, a high-gain sampled-data observer, and a linear stabilizing feedback for the delay-free system. The obtained results guarantee robustness to perturbations of the sampling schedule and different sampling and holding periods are considered. The approach is specialized to linear systems, where the predictor is available explicitly.
Continuous–discrete adaptive observers for state affine systems The observation of a class of multi-input multi-output (MIMO) state affine systems with constant unknown parameters and discrete time output measurements is addressed. Assuming some persistent excitation conditions to hold and the sampling steps to satisfy some boundedness hypotheses, system observability is ensured and a class of global exponential observers is synthesized.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
InterCloud: utility-oriented federation of cloud computing environments for scaling of application services Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.
Error Exponents for Asymmetric Two-User Discrete Memoryless Source-Channel Systems Consider transmitting two discrete memoryless correlated sources, consisting of a common and a private source, over a discrete memoryless multi-terminal channel with two transmitters and two receivers. At the transmitter side, the common source is observed by both encoders but the private source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common source, but only one decoder needs to reconstruct the private source. We hence refer to this system as the asymmetric 2-user source-channel system. In this work, we derive a universally achievable joint source-channel coding (JSCC) error exponent pair for the 2-user system by using a technique which generalizes Csiszar's method (1980) for the point-to-point (single-user) discrete memoryless source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish the JSCC theorem with single-letter characterization.
A process calculus for Mobile Ad Hoc Networks We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation equivalence (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation equivalence, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. We additionally define a symbolic semantics for the ω-calculus extended with the mismatch operator, along with a corresponding notion of symbolic bisimulation equivalence, and establish congruence results for this extension as well. Finally, we illustrate the practical utility of the calculus by developing and analyzing formal models of a leader election protocol for MANETs and the AODV routing protocol.
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
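As a hedged illustration of the control design mentioned above, the sketch below computes an LQR gain for a generic quarter-car model in Python. All masses, stiffnesses, and weights are assumed placeholder values, not the paper's, and the road input is omitted for brevity.

```python
# Hypothetical quarter-car LQR sketch (illustrative parameters, not the paper's).
import numpy as np
from scipy.linalg import solve_continuous_are

# States: [sprung pos, sprung vel, unsprung pos, unsprung vel]
ms, mu = 400.0, 40.0      # sprung / unsprung mass [kg] (assumed)
ks, kt = 20e3, 180e3      # suspension / tire stiffness [N/m] (assumed)
cs = 1.0e3                # passive damping [N s/m] (assumed)

A = np.array([
    [0, 1, 0, 0],
    [-ks/ms, -cs/ms, ks/ms, cs/ms],
    [0, 0, 0, 1],
    [ks/mu, cs/mu, -(ks + kt)/mu, -cs/mu],
])
B = np.array([[0], [1/ms], [0], [-1/mu]])  # actuator force acts on both masses

Q = np.diag([1e4, 1e2, 1e4, 1e2])  # comfort-vs-handling weights (assumed)
R = np.array([[1e-4]])

# Solve the continuous-time algebraic Riccati equation and form u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain:", K)
```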
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable a greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm² and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves an SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1.205, 0.205, 0.103, 0.1025, 0.05275, 0.035444, 0.000704, 0, 0, 0, 0, 0, 0, 0
Aker: A Design and Verification Framework for Safe and Secure SoC Access Control Modern systems on a chip (SoCs) utilize heterogeneous architectures where multiple IP cores have concurrent access to on-chip shared resources. In security-critical applications, IP cores have different privilege levels for accessing shared resources, which must be regulated by an access control system. Aker is a design and verification framework for SoC access control. Aker builds upon the Access...
WHISK: an uncore architecture for dynamic information flow tracking in heterogeneous embedded SoCs In this paper, we describe for the first time, how Dynamic Information Flow Tracking (DIFT) can be implemented for heterogeneous designs that contain one or more on-chip accelerators attached to a network-on-chip. We observe that implementing DIFT for such systems requires holistic platform level view, i.e., designing individual components in the heterogeneous system to be capable of supporting DIFT is necessary but not sufficient to correctly implement full-system DIFT. Based on this observation we present a new system architecture for implementing DIFT, and also describe wrappers that provide DIFT functionality for third-party IP components. Results show that our implementation minimally impacts performance of programs that do not utilize DIFT, and the price of security is constant for modest amounts of tagging and then sub-linearly increases with the amount of tagging.
Hardware-Assisted Detection of Malicious Software in Embedded Systems One of the critical security threats to computer systems is the execution of malware or malicious software. Several intrusion detection systems have been proposed which perform detection analysis in the software using the audit files generated by the operating system. Software-based solutions to this problem are relatively slow, so these techniques can be used forensically, but not in real-time to stop an exploit before it has an opportunity to do damage. We present a technique to implement intrusion detection for secure embedded systems by detecting behavioral differences between the correct system and the malware. The system is implemented using FPGA logic to enable the detection process to be regularly updated to adapt to new malware and changing system behavior.
SHIELD: a software hardware design methodology for security and reliability of MPSoCs Security of MPSoCs is an emerging area of concern in embedded systems. Security is jeopardized by code injection attacks, which are the most common types of software attacks. Previous attempts to detect code injection in MPSoCs have been burdened with significant performance overheads. In this work, we present a hardware/software methodology, "SHIELD", to detect code injection attacks in MPSoCs. SHIELD instruments the software programs running on application processors in the MPSoC and also extracts control flow and basic-block execution time information for runtime checking. We employ a dedicated security processor (monitor processor) to supervise the application processors on the MPSoC. Custom hardware is designed and used in the monitor and application processors. The monitor processor uses the custom hardware to rapidly analyze information communicated to it from the application processors at runtime. We have implemented SHIELD on a commercial extensible processor (Xtensa LX2) and tested it on a multiprocessor JPEG encoder program. In addition to code injection attacks, the system is also able to detect 83% of bit-flip errors in the control flow instructions. The experiments show that SHIELD produces systems with a runtime at least 9 times faster than the previous solution. SHIELD incurs a runtime (clock cycles) performance overhead of only 6.6% and an area overhead of 26.9%, when compared to a non-secure system.
Towards decentralized system-level security for MPSoC-based embedded applications. With the increasing connectivity and complexity of embedded systems, security issues have become a key consideration in design. In this paper, we propose a decentralized system-level approach for isolating application tasks without the need to rely on a centralized privileged authority at run-time. We discuss the need for isolation to reduce the potential impact of a task compromise or untrustworthy IP block, and present mechanisms to allow for safe sharing of memory regions and IP blocks between tasks in the system. After exploring the architectural requirements for enforcing our security model we present a hardware Isolation Unit, which can be customized for different types of dynamic permission changes depending on task-resource relationships and added to heterogeneous MPSoCs to enforce our security approach.
An In-depth Study of MPU-Based Isolation Techniques. Many attacks have been reported and published targeting constrained embedded systems. Attackers try to exploit vulnerabilities through all possible layers of abstraction. A single vulnerability can be enough to take over the whole device and change its intended behavior. Hardware/software isolation architectures implemented in embedded devices provide access control mechanisms to establish a protected execution environment and guarantee the behavior of the running applications. They enforce the boundaries to stop a malicious flaw from propagating from one application to others, especially those that are critical. They represent a form of resilience to different exploits. This paper provides a detailed study of existing memory protection unit-based isolation architectures for lightweight devices and defines four important criteria to evaluate and compare architectures from both academia and industry. Outcomes of this work will help developers and hardware designers to find balance between performance and security.
Time-Predictable Acceleration of Deep Neural Networks on FPGA SoC Platforms This work focuses on the time-predictable execution of Deep Neural Networks (DNNs) accelerated on FPGA System-on-Chips (SoCs). The modern DPU accelerator by Xilinx is considered. An extensive profiling campaign targeting the Zynq Ultrascale+ platform has been performed to study the execution behavior of the DPU when accelerating a set of state-of-the-art DNNs for Advanced Driver Assistance Systems...
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
Trellis-coded modulation with bit interleaving and iterative decoding This paper considers bit-interleaved coded modulation (BICM) for bandwidth-efficient transmission using software radios. A simple iterative decoding (ID) method with hard-decision feedback is suggested to achieve better performance. The paper shows that convolutional codes with good Hamming-distance property can provide both high diversity order and large free Euclidean distance for BICM-ID. The method offers a common framework for coded modulation over channels with a variety of fading statistics. In addition, BICM-ID allows an efficient combination of punctured convolutional codes and multiphase/level modulation, and therefore provides a simple mechanism for variable-rate transmission
A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models such as causal models and abstraction hierarchies are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas, symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
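For context, the weak-injection locking dynamics that the paper extends to strong injection are classically described by Adler's equation; the form below is the standard textbook version, not the paper's generalized equations.

```latex
% Adler's equation for an LC oscillator under weak injection (classical form):
\frac{d\phi}{dt} \;=\; \omega_0 - \omega_{\mathrm{inj}}
  \;-\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}\,\sin\phi ,
% Locking requires the frequency offset to lie within the lock range:
\qquad
\lvert \omega_0 - \omega_{\mathrm{inj}} \rvert
  \;\le\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}} .
```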
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to choose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3-18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI.
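The following Python toy model illustrates the MAC-over-pointers idea described in the abstract. It is a didactic sketch only: real CCFI uses AES-NI MACs enforced by compiler-inserted checks, and the names tag_pointer/check_pointer are hypothetical, not the paper's API.

```python
# Toy model of MAC-protected control-flow pointers in the spirit of CCFI.
# HMAC-SHA256 stands in for the hardware AES-NI MAC used by the real system.
import hmac, hashlib, os

KEY = os.urandom(32)  # per-process secret key

def tag_pointer(ptr: int, location: int) -> bytes:
    # Bind the pointer value to its storage location so copies can't be replayed.
    msg = ptr.to_bytes(8, "little") + location.to_bytes(8, "little")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

def check_pointer(ptr: int, location: int, tag: bytes) -> bool:
    # Constant-time comparison before the pointer may be used for control flow.
    return hmac.compare_digest(tag, tag_pointer(ptr, location))

ret_addr, slot = 0x400123, 0x7FFF0010
tag = tag_pointer(ret_addr, slot)
assert check_pointer(ret_addr, slot, tag)        # legitimate use passes
assert not check_pointer(0xDEAD, slot, tag)      # overwritten pointer rejected
```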
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
Scores: 1.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.033333, 0, 0, 0, 0, 0, 0, 0
A Highly Sensitive Fiber-Optic Fabry-Perot Interferometer Based on Internal Reflection Mirrors for Refractive Index Measurement. In this study, a new type of highly sensitive fiber-optic Fabry-Perot interferometer (FFPI) is proposed with high sensitivity over a wide refractive index (RI) measurement range based on internal reflection mirrors of a micro-cavity. The sensor head consists of a single-mode fiber (SMF) with an open micro-cavity. Since the light reflections of gold thin films are not affected by the RI of different measuring media, the sensor is designed to improve the fringe visibility of optical interference by sputtering gold films of various thicknesses on the inner surfaces of the micro-cavity, as a semi-transparent mirror (STM) and a total-reflection mirror (TRM). Experiments have been carried out to verify the feasibility of the sensor's design. It is shown that the fabricated sensor has strong interference visibility exceeding 15 dB over a wide RI measurement range, the sensor sensitivity is higher than 1160 nm/RIU, and the RI resolution is better than 1.0 × 10⁻⁶ RIU.
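For background, low-finesse fiber Fabry-Perot sensors of this kind are commonly analyzed with the standard two-beam interference model below; this is a generic model, not the paper's specific equations.

```latex
% Two-beam interference for a low-finesse fiber Fabry-Perot cavity of length L
% filled with a medium of refractive index n:
I(\lambda) \;=\; I_1 + I_2 + 2\sqrt{I_1 I_2}\,
  \cos\!\left(\frac{4\pi n L}{\lambda} + \varphi_0\right)
% A change in the medium's RI shifts the fringe wavelengths, which is the
% wavelength-shift readout such sensors interrogate.
```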
Photonic crystal fiber Mach-Zehnder interferometer for refractive index sensing. We report on a refractive index sensor using a photonic crystal fiber (PCF) interferometer which was realized by fusion splicing a short section of PCF (Blaze Photonics, LMA-10) between two standard single-mode fibers. The fully collapsed air holes of the PCF at the splice regions allow the coupling of PCF core and cladding modes, which makes a Mach-Zehnder interferometer. The transmission spectrum exhibits a sinusoidal interference pattern which shifts differently when the cladding/core surface of the PCF is immersed in surrounding media of different RI. Experimental results using wavelength-shift interrogation for sensing different concentrations of sucrose solution show that a resolution of 1.62 × 10⁻⁴ to 8.88 × 10⁻⁴ RIU or 1.02 × 10⁻⁴ to 9.04 × 10⁻⁴ RIU (sensing length of 3.50 or 5.00 cm, respectively) was achieved for refractive indices in the range of 1.333 to 1.422, suggesting that PCF interferometers are attractive for chemical, biological, and biochemical sensing with aqueous solutions, as well as for civil engineering and environmental monitoring applications.
Fiber optic sensor for acoustic detection of partial discharges in oil-paper insulated electrical systems. A fiber optic interferometric sensor with an intrinsic transducer along a length of the fiber is presented for ultrasound measurements of the acoustic emission from partial discharges inside oil-filled power apparatus. The sensor is designed for high-sensitivity measurements in a harsh electromagnetic field environment, with wide temperature changes and immersion in oil. It provides sufficient sensitivity for the application, in which the acoustic pressure is on the order of a few pascals at a frequency of 150 kHz. In addition, accessibility to the sensing region is guaranteed by immune fiber-optic cables and the optical phase sensor output. The sensor design is a compact and rugged coil of fiber. In addition to a complete calibration, the in-situ results show that two types of partial discharges are measured through their acoustic emissions with the sensor immersed in oil.
In-line fiber optic interferometric sensors in single-mode fibers. In-line fiber optic interferometers have attracted intensive attention for their potential sensing applications in refractive index, temperature, pressure and strain measurement, etc. Typical in-line fiber-optic interferometers are of two types: Fabry-Perot interferometers and core-cladding-mode interferometers. It's known that the in-line fiber optic interferometers based on single-mode fibers can exhibit compact structures, easy fabrication and low cost. In this paper, we review two kinds of typical in-line fiber optic interferometers formed in single-mode fibers fabricated with different post-processing techniques. Also, some recently reported specific technologies for fabricating such fiber optic interferometers are presented.
Interferometric fiber optic sensors. Fiber optic interferometers to sense various physical parameters including temperature, strain, pressure, and refractive index have been widely investigated. They can be categorized into four types: Fabry-Perot, Mach-Zehnder, Michelson, and Sagnac. In this paper, each type of interferometric sensor is reviewed in terms of operating principles, fabrication methods, and application fields. Some specific examples of recently reported interferometeric sensor technologies are presented in detail to show their large potential in practical applications. Some of the simple to fabricate but exceedingly effective Fabry-Perot interferometers, implemented in both extrinsic and intrinsic structures, are discussed. Also, a wide variety of Mach-Zehnder and Michelson interferometric sensors based on photonic crystal fibers are introduced along with their remarkable sensing performances. Finally, the simultaneous multi-parameter sensing capability of a pair of long period fiber grating (LPG) is presented in two types of structures; one is the Mach-Zehnder interferometer formed in a double cladding fiber and the other is the highly sensitive Sagnac interferometer cascaded with an LPG pair.
Study on the application of an ultra-high-frequency fractal antenna to partial discharge detection in switchgears. The ultra-high-frequency (UHF) method is used to analyze the insulation condition of electric equipment by detecting the UHF electromagnetic (EM) waves excited by partial discharge (PD). As part of the UHF detection system, the UHF sensor determines the detection system's performance in signal extraction and recognition. In this paper, a UHF antenna sensor with a fractal structure for PD detection in switchgears was designed by means of modeling, simulation and optimization. This sensor, with a flat-plate structure, had two resonance frequencies of 583 MHz and 732 MHz. In the laboratory, four kinds of insulation defect models were positioned in the testing switchgear for typical PD tests. The results show that the sensor could reproduce the electromagnetic waves well. Furthermore, to optimize the installation position of the inner sensor for the best detection performance, a precise simulation model of the switchgear was developed to study the propagation characteristics of UHF signals in switchgear by the finite-difference time-domain (FDTD) method. According to the results of simulation and verification tests, the sensor should be positioned on the right side of the bottom plate in the front cabinet. This research establishes the foundation for further study on the application of the UHF technique in switchgear PD online detection.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
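A minimal Python sketch of the MRU-change idea follows, assuming per-set LRU replacement stacks: an access that does not hit the most recently used line of its set is counted as an MRU change. Parameters and structure are illustrative, not the paper's simulator.

```python
# Sketch: counting MRU changes in a set-associative cache via replacement stacks.
from collections import defaultdict

def mru_changes(addresses, num_sets=64, line_size=64, assoc=4):
    stacks = defaultdict(list)   # set index -> lines, MRU at position 0
    changes = 0
    for addr in addresses:
        line = addr // line_size
        s = line % num_sets
        stack = stacks[s]
        if not stack or stack[0] != line:   # access falls outside the MRU region
            changes += 1
        if line in stack:
            stack.remove(line)
        stack.insert(0, line)               # promote the accessed line to MRU
        del stack[assoc:]                   # evict beyond the associativity
    return changes
```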
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
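The partial-state pattern behind such in-network aggregates can be illustrated with a small Python sketch for AVG, where each node forwards a (sum, count) pair rather than raw readings. This is a generic illustration of the interface described above, not the system's code.

```python
# In-network AVG aggregation: a node merges children's partial states
# <sum, count> and forwards a single record up the routing tree.
def init_avg(value):
    return (value, 1)                      # <sum, count> for one local reading

def merge_avg(a, b):
    return (a[0] + b[0], a[1] + b[1])      # combine two partial states

def eval_avg(state):
    s, c = state
    return s / c                           # final evaluation at the root

# A parent with reading 21.0 merging two children's partial states:
parent = init_avg(21.0)
for child_state in [(40.5, 2), (19.0, 1)]:
    parent = merge_avg(parent, child_state)
print(eval_avg(parent))                    # 80.5 / 4 = 20.125
```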
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes (ILP, TLP, and DLP), demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-μm CMOS technology in an area of 1.75 × 1.55 mm², the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10⁻⁹ with a pseudorandom bit sequence of 2²³ − 1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
Design and implementation of a reconfigurable FIR filter Finite impulse response (FIR) filters are very important blocks in digital communication systems. Many efforts have been made to improve the filter performance, e.g., less hardware and higher speed. In addition, software radio has recently gained much attention due to the need for integrated and reconfigurable communication systems. To this end, reconfigurability has become an important issue for the future filter design. In this paper, we present a digit-reconfigurable FIR filter architecture with the finest granularity. The proposed architecture is implemented in a single-poly quadruple-metal 0.35-μm CMOS technology. Measurement results show that the fabricated chip consumes 16.5 mW of power when operating at 86 MHz under 2.5 V.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1.053875, 0.03, 0.0205, 0.005812, 0.003875, 0.0015, 0, 0, 0, 0, 0, 0, 0, 0
Implementation of a Cross-Layer Sensing Medium-Access Control Scheme. In this paper, compressed sensing (CS) theory is utilized in a medium-access control (MAC) scheme for wireless sensor networks (WSNs). We propose a new cross-layer compressed sensing medium-access control (CL CS-MAC) scheme, combining the physical layer and data link layer, where the wireless transmission in the physical layer is considered as a compression process of the requested packets in the data link layer according to compressed sensing (CS) theory. We first introduce the use of compressive complex requests to identify the exact active sensor nodes, which makes the scheme more efficient. Moreover, because the reconstruction process is executed in the complex field of the physical layer, where no bit and frame synchronizations are needed, the asynchronous and random request scheme can be implemented without synchronization payload. We set up a testbed based on software-defined radio (SDR) to implement the proposed CL CS-MAC scheme practically and to demonstrate its validity. For large-scale WSNs, the simulation results show that the proposed CL CS-MAC scheme provides higher throughput and robustness than the carrier sense multiple access (CSMA) and compressed sensing medium-access control (CS-MAC) schemes.
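The physical-layer model implicit in such CS-MAC schemes is the standard compressed-sensing measurement model; the formulation below is generic, with the symbols (Φ, x, w) introduced here for illustration rather than taken from the paper.

```latex
% Generic compressed-sensing measurement model:
%   x \in \mathbb{R}^N marks which of the N sensors issued a request
%   (k-sparse, k \ll N); \Phi is the M \times N (M \ll N) measurement
%   matrix induced by the physical-layer superposition; w is noise.
y = \Phi x + w, \qquad
\hat{x} = \arg\min_{x} \|x\|_1
  \ \ \text{s.t.}\ \ \|y - \Phi x\|_2 \le \epsilon .
```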
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
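A didactic Python sketch of a Chord-style lookup on a 2^m identifier circle follows. The data structures (node_succ, fingers) are simplified stand-ins for the protocol's per-node state, not the paper's pseudocode, and stabilization under churn is omitted.

```python
# Minimal sketch of Chord's iterative lookup on an m-bit identifier circle.
# node_succ maps each node id to its successor; fingers[n][i] approximates
# successor(n + 2^i).
def in_interval(x, a, b, m):
    """True if x lies in the circular interval (a, b] on a 2^m ring."""
    a, b, x = a % 2**m, b % 2**m, x % 2**m
    return (a < x <= b) if a < b else (x > a or x <= b)

def find_successor(n, key, fingers, node_succ, m):
    while not in_interval(key, n, node_succ[n], m):
        # Jump to the closest finger strictly preceding the key (largest hop first).
        for i in reversed(range(m)):
            if in_interval(fingers[n][i], n, key - 1, m):
                n = fingers[n][i]
                break
        else:
            break   # no closer node known; current successor is the answer
    return node_succ[n]

# Tiny example: nodes {0, 3} on a 3-bit ring; successor of key 5 is node 0.
m = 3
node_succ = {0: 3, 3: 0}
fingers = {0: [3, 3, 0], 3: [0, 0, 0]}
print(find_successor(0, 5, fingers, node_succ, m))   # -> 0
```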
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
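A minimal NumPy sketch of ADMM applied to the lasso, one of the applications listed above; variable names follow the standard scaled-dual formulation, and the problem instance is synthetic.

```python
# ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Factor used by the x-update: (A^T A + rho I) x = A^T b + rho (z - u)
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # quadratic step
        z = soft(x + u, lam / rho)                          # proximal step (shrinkage)
        u = u + x - z                                       # scaled dual update
    return z

# Example: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30); x_true[[3, 7]] = [2.0, -1.5]
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(lasso_admm(A, b, lam=1.0), 2))
```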
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Time-varying graphs and dynamic networks The past decade has seen intensive research efforts on highly dynamic wireless and mobile networks (variously called delay-tolerant, disruption-tolerant, challenged, opportunistic, etc.) whose essential feature is a possible absence of end-to-end communication routes at any instant. As part of these efforts, a number of important concepts have been identified, based on new meanings of distance and connectivity. The main contribution of this paper is to review and integrate the collection of these concepts, formalisms, and related results found in the literature into a unified coherent framework, called TVG (for time-varying graphs). Besides this definitional work, we connect the various assumptions through a hierarchy of classes of TVGs defined with respect to properties with algorithmic significance in distributed computing. One of these classes coincides with the family of dynamic graphs over which population protocols are defined. We examine the (strict) inclusion hierarchy among the classes. The paper also provides a quick review of recent stochastic models for dynamic networks that aim to enable analytical investigation of the dynamics.
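One common formalization of a TVG in this line of work is the quintuple below; the notation is a sketch of the framework rather than a reproduction of the paper's full definitions.

```latex
% A time-varying graph is commonly formalized as a quintuple
%   V, E           -- entities and the (possible) relations between them,
%   \mathcal{T}    -- the lifetime of the system,
%   \rho : E \times \mathcal{T} \to \{0,1\}   -- presence function
%                     (whether an edge is available at a given time),
%   \zeta : E \times \mathcal{T} \to \mathbb{T} -- latency function
%                     (how long a traversal initiated at that time takes).
\mathcal{G} = (V, E, \mathcal{T}, \rho, \zeta)
```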
On Temporal Graph Exploration. A temporal graph is a graph in which the edge set can change from step to step. The temporal graph exploration problem TEXP is the problem of computing a foremost exploration schedule for a temporal graph, i.e., a temporal walk that starts at a given start node, visits all nodes of the graph, and has the smallest arrival time. We consider only temporal graphs that are connected at each step. For such temporal graphs with n nodes, we show that it is NP-hard to approximate TEXP with ratio O(n^{1-ε}) for any ε > 0. We also provide an explicit construction of temporal graphs that require Θ(n²) steps to be explored. We then consider TEXP under the assumption that the underlying graph (i.e., the graph that contains all edges that are present in the temporal graph in at least one step) belongs to a specific class of graphs. Among other results, we show that temporal graphs can be explored in O(n^{1.5} k² log n) steps if the underlying graph has treewidth k and in O(n log³ n) steps if the underlying graph is a 2 × n grid. We also show that sparse temporal graphs with regularly present edges can always be explored in O(n) steps.
Exploration of the T-Interval-Connected Dynamic Graphs: The Case of the Ring In this paper, we study the T-interval-connected dynamic graphs from the point of view of the time necessary and sufficient for their exploration by a mobile entity (agent). A dynamic graph (more precisely, an evolving graph) is T-interval-connected (T ≥ 1) if, for every window of T consecutive time steps, there exists a connected spanning subgraph that is stable (always present) during this period. This property of connection stability over time was introduced by Kuhn, Lynch and Oshman [6] (STOC 2010). We focus on the case when the underlying graph is a ring of size n, and we show that the worst-case time complexity for the exploration problem is $2n - T - \Theta(1)$ time units if the agent knows the dynamics of the graph, and $n + \frac{n}{\max\{1, T-1\}}(\delta - 1) \pm \Theta(\delta)$ time units otherwise, where δ is the maximum time between two successive appearances of an edge.
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
Brief announcement: naming and counting in anonymous unknown dynamic networks Contribution. We study the fundamental naming and counting problems in networks that are anonymous, unknown, and possibly dynamic. Network dynamicity is modeled by the 1-interval connectivity model [KLO10]. We first prove that, on static networks with broadcast, counting is impossible to solve without a leader and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. With a leader, we solve counting in linear time. Then we focus on dynamic networks with broadcast. We show that if nodes know an upper bound on the maximum degree that will ever appear, then they can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. This variation is then proved to be computationally equivalent to a full-knowledge model with unique names.
An Identity-Free and On-Demand Routing Scheme against Anonymity Threats in Mobile Ad Hoc Networks Introducing node mobility into the network also introduces new anonymity threats. This important change of the concept of anonymity has recently attracted attention in mobile wireless security research. This paper presents identity-free routing and on-demand routing as two design principles of anonymous routing in mobile ad hoc networks. We devise ANODR (ANonymous On-Demand Routing) as the needed anonymous routing scheme that is compliant with the design principles. Our security analysis and simulation study verify the effectiveness and efficiency of ANODR.
Conscious and Unconscious Counting on Anonymous Dynamic Networks. This paper addresses the problem of counting the size of a network where (i) processes have the same identifiers (anonymous nodes) and (ii) the network topology constantly changes (dynamic network). Changes are driven by a powerful adversary that can look at internal process states and add and remove edges in order to obstruct the convergence of the algorithm to the correct count. The paper proposes two leader-based counting algorithms. Such algorithms are based on a technique that mimics an energy transfer between network nodes. The first algorithm assumes that the adversary cannot generate either disconnected network graphs or network graphs where nodes have degree greater than D. In this algorithm, the leader can count the size of the network and detect the counting termination in a finite time (i.e., conscious counting). The second algorithm assumes that the adversary only keeps the network graph connected at any time, and we prove that the leader can still converge to a correct count in a finite number of rounds, but it is not conscious of when this convergence happens.
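The energy-transfer technique lends itself to a toy simulation. The sketch below is a hedged simplification, not the paper's protocol: every anonymous node starts with one unit of energy, each non-leader sends half of its energy to a random current neighbor each round, and the leader only absorbs. Since energy is conserved and the leader is absorbing, the leader's accumulated energy approaches n − 1, so energy + 1 estimates the network size. The per-round topology generator is a stand-in for the adversary.

```python
import random

def random_connected_ring(n):
    # Adversary stand-in: a ring over a fresh random permutation, so the
    # graph is connected at every round but constantly changing.
    perm = list(range(n))
    random.shuffle(perm)
    return {perm[i]: (perm[i - 1], perm[(i + 1) % n]) for i in range(n)}

def leader_count_estimate(n, rounds=2000, leader=0):
    energy = [1.0] * n
    energy[leader] = 0.0
    for _ in range(rounds):
        nbrs = random_connected_ring(n)
        inbox = [0.0] * n
        for v in range(n):
            if v == leader:
                continue          # the leader absorbs and never releases
            share = energy[v] / 2
            energy[v] -= share
            inbox[random.choice(nbrs[v])] += share
        for v in range(n):
            energy[v] += inbox[v]
    return energy[leader] + 1     # leader's running estimate of n

print(round(leader_count_estimate(50), 2))  # approaches 50.0 as rounds grow
```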
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
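The paper's core primitive, pairwise push-pull averaging, fits in a few lines. The following hedged sketch assumes full membership knowledge (each node can sample a uniformly random peer) and shows the counting special case: one node starts at 1, the others at 0; averaging preserves the sum, so every value tends to 1/n.

```python
import random

def gossip_count(n, rounds=60):
    value = [0.0] * n
    value[0] = 1.0                       # one designated node holds the mass
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)      # uniform random peer (assumption)
            avg = (value[i] + value[j]) / 2
            value[i] = value[j] = avg    # push-pull: both adopt the average
    return value

values = gossip_count(100)
# After O(log n) rounds every value is positive w.h.p.; 1/value estimates n.
print(min(1 / v for v in values), max(1 / v for v in values))  # both near 100
```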
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
A proposal of architectural elements for implementing secure software download service in software defined radio In order to obtain an appropriate, high level of security, a number of architectural elements for secure downloading of software to a software defined radio (SDR) terminal have been pointed out. They include four different cryptographic techniques and the employment of tamper-resistant hardware. The cryptographic techniques employed are: (a) a secret key encryption technique; (b) a public key encryption technique; (c) a technique for cryptographic hashing and (d) a technique for digital signature. In particular, a protocol is proposed for exchanging cryptographic components in an automatic manner, without any assistance from the user. Implementation characteristics of certain cryptographic components are also discussed.
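A rough sketch of the verification step such a terminal would run on a downloaded image, combining technique (c), cryptographic hashing, with technique (d), a digital signature. It uses the third-party `cryptography` package, with Ed25519 standing in for whatever signature scheme an actual SDR framework would mandate; the pre-installed vendor public key is assumed to live in the tamper-resistant store.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: hash the software image, sign the digest.
vendor_key = Ed25519PrivateKey.generate()
image = b"\x7fELF...radio waveform module..."   # placeholder payload
signature = vendor_key.sign(hashlib.sha256(image).digest())

# Terminal side: recompute the hash and verify the signature before install.
def accept_download(image: bytes, signature: bytes, public_key) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(image).digest())
        return True
    except InvalidSignature:
        return False

pub = vendor_key.public_key()
print(accept_download(image, signature, pub))                # True
print(accept_download(image + b"tampered", signature, pub))  # False
```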
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm 2 and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.016109
0.022584
0.021735
0.018976
0.012388
0.008784
0.006752
0.002668
0.000103
0
0
0
0
0
A 1.1 G MAC/s Sub-Word-Parallel Digital Signal Processor for Wireless Communication Applications This work proposes a communication digital signal processor (DSP) suitable for massive signal processing operations in orthogonal frequency division multiplexing (OFDM) and code-division multiple-access (CDMA) communication systems. The OFDM-based IEEE 802.11a wireless LAN transceiver and CDMA-based WCDMA uplink receiver are simulated to evaluate the computation requirements of future communicatio...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
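Chord's key-to-node mapping is easy to model directly. The sketch below hashes node names and keys onto an m-bit identifier ring and assigns each key to its successor, the first node clockwise from the key's identifier; real Chord resolves the same mapping in O(log n) hops using finger tables, which this sketch replaces with a sorted list. Node names and the ring size are illustrative assumptions.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier bits: a 2^16-slot ring (assumption for the example)

def ring_id(name: str) -> int:
    # Hash names onto the identifier ring (Chord also uses SHA-1).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << M)

nodes = sorted(ring_id(f"node-{i}.example:8000") for i in range(8))

def successor(key: str) -> int:
    # The key is stored at the first node whose id is >= the key's id,
    # wrapping around the ring.
    kid = ring_id(key)
    i = bisect_left(nodes, kid)
    return nodes[i % len(nodes)]

print(successor("my-data-item"))  # identifier of the responsible node
```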
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
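Since the lasso is one of the review's running examples, here is a compact sketch of ADMM applied to it: minimize (1/2)||Ax − b||² + λ||x||₁. The x-update is a ridge-style linear solve, the z-update is elementwise soft-thresholding, and u is the scaled dual variable; the fixed penalty ρ and iteration count are arbitrary choices for the example.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    # minimize (1/2)||Ax - b||^2 + lam*||x||_1 via ADMM splitting x/z.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))             # ridge solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u = u + x - z                                                  # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # approximately recovers the sparse x
```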
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
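The successive-approximation loop at the heart of any SAR ADC is a binary search, which a few lines of behavioral Python can show (this is the textbook loop only; the paper's tri-level comparator and calibration are not modeled).

```python
def sar_convert(vin, vref=1.0, bits=10):
    # One bit per cycle, MSB first: tentatively set the bit, compare the
    # input against the resulting DAC level, keep the bit if vin is above.
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        if vin >= (trial / (1 << bits)) * vref:
            code = trial
    return code

print(sar_convert(0.314))  # 321: the 10-bit code for 0.314 V at vref = 1.0 V
```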
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Shouji: A Fast and Efficient Pre-Alignment Filter for Sequence Alignment. Motivation: The ability to generate massive amounts of sequencing data continues to overwhelm the processing capability of existing algorithms and compute infrastructures. In this work, we explore the use of hardware/software co-design and hardware acceleration to significantly reduce the execution time of short sequence alignment, a crucial step in analyzing sequenced genomes. We introduce Shouji, a highly parallel and accurate pre-alignment filter that remarkably reduces the need for computationally-costly dynamic programming algorithms. The first key idea of our proposed pre-alignment filter is to provide high filtering accuracy by correctly detecting all common subsequences shared between two given sequences. The second key idea is to design a hardware accelerator that adopts modern field-programmable gate array (FPGA) architectures to further boost the performance of our algorithm. Results: Shouji significantly improves the accuracy of pre-alignment filtering by up to two orders of magnitude compared to the state-of-the-art pre-alignment filters, GateKeeper and SHD. Our FPGA-based accelerator is up to three orders of magnitude faster than the equivalent CPU implementation of Shouji. Using a single FPGA chip, we benchmark the benefits of integrating Shouji with five state-of-the-art sequence aligners, designed for different computing platforms. The addition of Shouji as a pre-alignment step reduces the execution time of the five state-of-the-art sequence aligners by up to 18.8x. Shouji can be adapted for any bioinformatics pipeline that performs sequence alignment for verification. Unlike most existing methods that aim to accelerate sequence alignment, Shouji does not sacrifice any of the aligner capabilities, as it does not modify or replace the alignment step.
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All of the three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively. Copyright © 2014 John Wiley & Sons, Ltd.
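The dynamic-programming kernel that GSWABE's tiles evaluate is the Smith-Waterman recurrence. A plain-Python local-alignment sketch with linear gap penalties follows; the scoring parameters are example values, and real implementations also keep traceback information to retrieve the alignment itself.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # H[i][j] = best local alignment score ending at a[:i], b[:j];
    # clamping at 0 is what makes the alignment local.
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))  # optimal local alignment score
```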
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FindeR: Accelerating FM-Index-Based Exact Pattern Matching in Genomic Sequences through ReRAM Technology Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by 83% ~ 30K× and throughput per Watt by 3.5×~42.5K×.
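The FM-index search kernel that FindeR maps onto ReRAM is backward search over the Burrows-Wheeler transform. The sketch below builds the BWT naively and answers counting queries with LF-mapping; real implementations replace the O(n) rank scans with sampled occurrence tables, which is exactly the random-access-heavy step the paper accelerates.

```python
def bwt(text):
    # Naive Burrows-Wheeler transform; real indexes use suffix arrays.
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(bwt_str, pattern):
    # C[c] = number of characters in the text lexicographically < c.
    sorted_chars = sorted(bwt_str)
    C = {c: sorted_chars.index(c) for c in set(bwt_str)}
    def rank(c, i):
        return bwt_str[:i].count(c)   # naive O(n) rank; the accelerated step
    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):       # backward search, one LF step per char
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences(bwt("banana"), "ana"))  # 2
```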
SeGraM: a universal hardware accelerator for genomic sequence-to-graph and sequence-to-sequence mapping A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need to have a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. Since sequence-to-sequence mapping can be treated as a special case of sequence-to-graph mapping, we aim to design an accelerator that is efficient for both linear and graph-based read mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator, which finds the candidate locations in a given genome graph; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator, which performs alignment between a given read and the subgraph identified by MinSeed. We couple SeGraM with high-bandwidth memory to exploit low latency and highly-parallel memory access, which alleviates the memory bottleneck. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph (i.e., S2G) and sequence-to-sequence (i.e., S2S) mapping pipelines. First, SeGraM outperforms state-of-the-art S2G mapping tools by 5.9×/3.9× and 106×/742× for long and short reads, respectively, while reducing power consumption by 4.1×/4.4× and 3.0×/3.2×. Second, BitAlign outperforms a state-of-the-art S2G alignment tool by 41×-539× and three S2S alignment accelerators by 1.2×-4.8×. We conclude that SeGraM is a high-performance and low-cost universal genomics mapping accelerator that efficiently supports both sequence-to-graph and sequence-to-sequence mapping pipelines.
An FPGA Implementation of A Portable DNA Sequencing Device Based on RISC-V Miniature and mobile DNA sequencers are steadily growing in popularity as effective tools for genetics research. As basecalling algorithms continue to evolve, basecalling poses a serious challenge for small computing devices despite its increasing accuracy. Although general-purpose computing chips such as CPUs and GPUs can achieve fast results, they are not energy efficient enough for mobile applications. This paper presents an innovative solution, a basecalling hardware architecture based on RISC-V ISA, and after validation with our custom FPGA verification platform, it demonstrates a 1.95x energy efficiency ratio compared to x86. There is also a 38% improvement in energy efficiency ratio compared to ARM. In addition, this study also completes the verification work for subsequent ASIC designs.
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering, and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
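A toy seed-and-extend skeleton illustrates the setting FastHASH operates in: index every k-mer of the reference in a hash table, then look up a read's k-mers to collect candidate mapping locations. The diagonal-consistency vote below is a simplified stand-in for Adjacency Filtering, not the paper's exact mechanism.

```python
from collections import defaultdict

K = 4  # k-mer length (example value)

def build_index(ref):
    # Hash table from each k-mer to the reference positions where it occurs.
    index = defaultdict(list)
    for i in range(len(ref) - K + 1):
        index[ref[i:i + K]].append(i)
    return index

def candidates(read, index):
    votes = defaultdict(int)
    for j in range(0, len(read) - K + 1, K):        # non-overlapping seeds
        for pos in index.get(read[j:j + K], ()):
            votes[pos - j] += 1                     # seeds on one diagonal agree
    # Keep only locations supported by more than one adjacent seed.
    return [loc for loc, v in votes.items() if v > 1]

ref = "ACGTACGTTTACGGACGTACGA"
index = build_index(ref)
print(candidates("ACGTACGT", index))  # consistent mapping offsets, here [0]
```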
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
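The linear representation is easy to reproduce for small networks: encode the joint state of n Boolean variables as a one-hot vector of length 2^n and build the logical transition matrix L, so the dynamics become x(t+1) = L x(t). The sketch below constructs L from an update function and reads the number of fixed points off trace(L), with traces of powers of L counting states on short cycles.

```python
import numpy as np
from itertools import product

def transition_matrix(update, n):
    # Column for state s has a single 1 in the row of update(s).
    N = 2 ** n
    states = list(product([0, 1], repeat=n))
    idx = {s: i for i, s in enumerate(states)}
    L = np.zeros((N, N), dtype=int)
    for s in states:
        L[idx[update(s)], idx[s]] = 1
    return L

# Example 2-node network: x1' = x1 AND x2, x2' = x1 OR x2.
f = lambda s: (s[0] & s[1], s[0] | s[1])
L = transition_matrix(f, 2)
print(np.trace(L))      # 3 fixed points: (0,0), (0,1), (1,1)
print(np.trace(L @ L))  # states on cycles whose length divides 2
```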
The Transitive Reduction of a Directed Graph
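The entry above is title-only, but the classical construction is short enough to illustrate. For a DAG the transitive reduction is the unique minimal edge set with the same reachability, and a direct (non-optimal) sketch simply drops each edge (u, v) whenever v remains reachable from u without it.

```python
def transitive_reduction(adj):
    # adj: dict mapping each node to the set of its direct successors (a DAG).
    def reachable(src, dst, skip_edge):
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if (u, w) == skip_edge or w in seen:
                    continue
                if w == dst:
                    return True
                seen.add(w)
                stack.append(w)
        return False

    reduced = {u: set(vs) for u, vs in adj.items()}
    for u in adj:
        for v in list(adj[u]):
            if reachable(u, v, (u, v)):   # another path certifies redundancy
                reduced[u].discard(v)
    return reduced

dag = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
print(transitive_reduction(dag))  # a->c removed: {'a': {'b'}, 'b': {'c'}, 'c': set()}
```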
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A Hybrid Dynamic Load Balancing Algorithm For Distributed Systems Using Genetic Algorithms Dynamic Load Balancing (DLB) is sine qua non in modern distributed systems to ensure the efficient utilization of computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. The framework uses a Genetic Algorithm (GA)-based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby essentially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round trip time, and average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm utilizing OnLine Transaction Processing (OLTP) benchmark and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, or/and CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves the performance under different system states and under a wide range of workloads compared to the traditional decentralized algorithm.
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs) as one of today’s main flavor of deep learning techniques dominate in various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural networks compression and hardware acceleration, whi...
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
ViA: A Novel Vision-Transformer Accelerator Based on FPGA Since Google proposed Transformer in 2017, it has made significant natural language processing (NLP) development. However, the increasing cost is a large amount of calculation and parameters. Previous researchers designed and proposed some accelerator structures for transformer models in field-programmable gate array (FPGA) to deal with NLP tasks efficiently. Now, the development of Transformer has also affected computer vision (CV) and has rapidly surpassed convolution neural networks (CNNs) in various image tasks. And there are apparent differences between the image data used in CV and the sequence data in NLP. The details in the models contained with transformer units in these two fields are also different. The difference in terms of data brings about the problem of locality. The difference in the model structure brings about the problem of path dependence, which is not noticed in the existing related accelerator design. Therefore, in this work, we propose ViA, a novel vision transformer (ViT) accelerator architecture based on FPGA, to execute the transformer application efficiently and avoid the cost of these challenges. By analyzing the data structure in the ViT, we design an appropriate partition strategy to reduce the impact of data locality in the image and improve the efficiency of computation and memory access. Meanwhile, by observing the computing flow of the ViT, we use half-layer mapping and throughput analysis to reduce the impact of path dependence caused by the shortcut mechanism and fully utilize hardware resources to execute the Transformer efficiently. Based on these optimization strategies, we design two reuse processing engines with an internal stream, different from the previous overlap or stream design patterns. In the experimental stage, we implement the ViA architecture on a Xilinx Alveo U50 FPGA and finally achieve a ~5.2× improvement in energy efficiency compared with an NVIDIA Tesla V100, and a 4–10× improvement in performance compared with related FPGA-based accelerators, obtaining nearly 309.6 GOP/s peak computing performance.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Performance Improvement by Prioritizing the Issue of the Instructions in Unconfident Branch Slices. Single-thread performance has hardly improved for more than a decade. One of the largest obstacles to performance improvement is branch misprediction. There are two approaches to reducing this penalty: one is to reduce the frequency of mispredictions, and the other is to reduce the cycles consumed because of a misprediction. Improving branch predictors is the former approach, and many studies on this topic have been carried out over several decades; the latter approach, however, has rarely been studied, and this paper explores it. The cycles consumed because of a misprediction divide into two parts. The first part is the state recovery penalty, which consists of the cycles consumed rolling back the processor state. The second part is the misspeculation penalty, which is the cycles consumed during useless speculative execution from the fetch of a mispredicted branch until the completion of the branch's execution. We focus on reducing the misspeculation penalty. For this, we propose a scheme called PUBS, which issues the instructions in unconfident branch slices with the highest priority from the issue queue (IQ). Here, a branch slice is a set consisting of a branch and the instructions this branch directly or indirectly depends on, and a branch slice is unconfident if the associated branch prediction cannot be sufficiently trusted. By issuing instructions in unconfident branch slices as early as possible, the wait cycles of these instructions in the IQ, and thus the misspeculation penalty, are minimized. Our evaluation results using SPEC2006 benchmark programs show that the PUBS scheme improves the performance of programs with difficult-to-predict branches by 7.8% on average (a maximum of 19.2%) at a hardware cost of only 4.0 KB.
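A minimal sketch of the issue policy follows. It assumes each ready instruction is tagged with its age and a flag saying whether it belongs to an unconfident branch slice; the two-level priority key (slice membership first, age second) captures the essence of PUBS-style prioritization, though the real scheme operates inside a hardware issue queue.

import heapq

def issue_order(instructions):
    # instructions: list of (age, in_unconfident_slice) for ready instructions.
    # Priority key: unconfident-slice membership first, then age (oldest first).
    queue = [(0 if unconf else 1, age) for age, unconf in instructions]
    heapq.heapify(queue)
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]

# Four ready instructions; numbers 1 and 3 belong to an unconfident slice.
print(issue_order([(0, False), (1, True), (2, False), (3, True)]))
# -> [1, 3, 0, 2]: the unconfident branch slice issues first.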
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
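The idea translates directly into a toy interpreter. In the Python sketch below (an illustration of the concept, not a reproduction of the paper's hardware or software realizations), the program is a flat list of routine references with inline operands, so execution is one indirect call per instruction with no opcode decoding step.

# "Threaded code": the program is a sequence of routine references rather
# than opcodes, so execution needs no decode step, only an indirect call.

def lit(stack, program, ip):
    stack.append(program[ip + 1])   # operand stored inline after the routine
    return ip + 2

def add(stack, program, ip):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return ip + 1

def halt(stack, program, ip):
    return None

def run(program):
    stack, ip = [], 0
    while ip is not None:
        ip = program[ip](stack, program, ip)
    return stack

# Computes 2 + 3: each "instruction" is a direct reference to its routine.
print(run([lit, 2, lit, 3, add, halt]))   # -> [5]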
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter, and the tool set's instruction interpreters support several popular instruction sets. A related challenge is to model complex modern machines and effectively manage the large software projects needed to model such machines; Asim addresses these needs by providing a modular and reusable framework for creating many models. The framework's modularity helps break down the performance-modeling problem into individual pieces that can be modeled separately, while its reusability allows using a software component repeatedly in different contexts.
An approach to testing specifications An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e.g., the protocol specification) with the reference specification (e.g., the communication service specification). Non-determinism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of non-determinacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a Transport protocol and a distributed mutual exclusion algorithm is described.
Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing (HPC). Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is lacking. For example, it is often unclear if reported improvements are deterministic or observed by chance. In addition to distilling best practices from existing work, we propose statistically sound analysis and reporting techniques and simple guidelines for experimental design in parallel computing and codify them in a portable benchmarking library. We aim to improve the standards of reporting research results and initiate a discussion in the HPC field. A wide adoption of our minimal set of rules will lead to better interpretability of performance results and improve the scientific culture in HPC.
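One of the recommended practices, reporting a nonparametric confidence interval around the median instead of a bare mean, can be sketched in a few lines. The bootstrap below is a generic illustration of that guideline; the runtimes and bootstrap parameters are made up for the example.

import random
import statistics

def bootstrap_median_ci(samples, reps=10000, alpha=0.05, seed=1):
    # Nonparametric bootstrap CI for the median: a simple way to show
    # whether a reported improvement is deterministic or observed by chance.
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(samples, k=len(samples)))
        for _ in range(reps)
    )
    lo = medians[int(reps * alpha / 2)]
    hi = medians[int(reps * (1 - alpha / 2)) - 1]
    return statistics.median(samples), (lo, hi)

runtimes = [10.2, 10.4, 9.9, 10.8, 10.1, 10.3, 11.0, 10.0, 10.2, 10.5]
med, (lo, hi) = bootstrap_median_ci(runtimes)
print(f"median {med:.2f}s, 95% CI [{lo:.2f}, {hi:.2f}]")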
Golden Gate: Bridging The Resource-Efficiency Gap Between ASICs and FPGA Prototypes We present Golden Gate, an FPGA-based simulation tool that decouples the timing of an FPGA host platform from that of the target RTL design. In contrast to previous work in static time-multiplexing of FPGA resources, Golden Gate employs the Latency-Insensitive Bounded Dataflow Network (LI-BDN) formalism to decompose the simulator into subcomponents, each of which may be independently and automatically optimized. This structure allows Golden Gate to support a broad class of optimizations that improve resource utilization by implementing FPGA-hostile structures over multiple cycles, while the LI-BDN formalism ensures that the simulator still produces bit- and cycle-exact results. To verify that these optimizations are implemented correctly, we also present LIME, a model-checking tool that provides a push-button flow for checking whether optimized subcomponents adhere to an associated correctness specification, while also guaranteeing forward progress. Finally, we use Golden Gate to generate a cycle-exact simulator of a multi-core SoC, where we reduce LUT utilization by up to 26% by coercing multi-ported, combinationally read memories into simulation models backed by time-multiplexed block RAMs, enabling us to simulate 50% more cores on a single FPGA.
Hardware Design with a Scripting Language The Python Hardware Description Language (PyHDL) provides a scripting interface to object-oriented hardware design in C++. PyHDL uses the PamDC and PAM-Blox libraries to generate FPGA circuits. The main advantage of scripting languages is a reduction in development time for high-level designs. We propose a two-step approach: first, use scripting to explore the effects of composition and parameterisation; second, convert the scripted designs into compiled components for performance. Our results show that, for small designs, our method offers a 5 to 7 times improvement in turnaround time. For a large 10×10 matrix-vector multiplier, our method offers 365% and 19% improvements in turnaround time over purely scripted and purely compiled methods, respectively.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
An artificial neural network (p,d,q) model for time series forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite the advantages cited for artificial neural networks, their performance on some real time series is not satisfactory. Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings indicate that integrating different models can be an effective way of improving their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks alone. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by artificial neural networks. It can therefore be used as an appropriate alternative for forecasting tasks, especially when higher forecasting accuracy is needed.
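A hedged sketch of such a hybrid follows, assuming the common two-stage recipe: fit an ARIMA model for the linear structure, then train a small neural network on lagged residuals and add its prediction to the linear forecast. The toy series, model orders, and network size are illustrative choices, and the sketch relies on the statsmodels and scikit-learn packages.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(300)
y = 0.6 * np.sin(t / 8) + 0.3 * rng.standard_normal(300)  # toy series

# Stage 1: capture the linear component with ARIMA(p, d, q).
arima = ARIMA(y, order=(2, 0, 1)).fit()
resid = arima.resid

# Stage 2: the ANN models whatever nonlinear structure is left in the
# residuals, using a window of lagged residuals as features.
lags = 4
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
target = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X, target)

# Hybrid one-step forecast = linear forecast + predicted residual.
linear_next = arima.forecast(steps=1)[0]
resid_next = ann.predict(resid[-lags:].reshape(1, -1))[0]
print("hybrid forecast:", linear_next + resid_next)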
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the target information of the current environment required by higher-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments are still serious problems in robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. Firstly, the paper accomplishes depth filtering by improving the traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and segments the color image at a resolution of 640×480 from a Kinect camera. Finally, the segmentation results are applied to object recognition in indoor scenes to validate the effectiveness of the scene segmentation. The results of indoor segmentation demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy and reduce the time of object recognition in cluttered indoor scenes.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low-power, energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme, without common-mode voltage variation. In addition, a near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation at ultra-low supply voltages while simultaneously increasing energy efficiency. The NTV-optimized design technique is also applied to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and a sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector Memory deduplication, a well-known technique to reduce the memory footprint across virtual machines, is now also a default-on feature inside the Windows 8.1 and Windows 10 operating systems. Deduplication maps multiple identical copies of a physical page onto a single shared copy with copy-on-write semantics. As a result, a write to such a shared page triggers a page fault and is thus measurably slower than a write to a normal page. Prior work has shown that an attacker able to craft pages on the target system can use this timing difference as a simple single-bit side channel to discover that certain pages exist in the system. In this paper, we demonstrate that the deduplication side channel is much more powerful than previously assumed, potentially providing an attacker with a weird machine to read arbitrary data in the system. We first show that an attacker controlling the alignment and reuse of data in memory is able to perform byte-by-byte disclosure of sensitive data (such as randomized 64 bit pointers). Next, even without control over data alignment or reuse, we show that an attacker can still disclose high-entropy randomized pointers using a birthday attack. To show these primitives are practical, we present an end-to-end JavaScript-based attack against the new Microsoft Edge browser, in absence of software bugs and with all defenses turned on. Our attack combines our deduplication-based primitives with a reliable Rowhammer exploit to gain arbitrary memory read and write access in the browser. We conclude by extending our JavaScript-based attack to cross-process system-wide exploitation (using the popular nginx web server as an example) and discussing mitigation strategies.
Secure Page Fusion with VUsion: https: //www.vusec.net/projects/VUsion. To reduce memory pressure, modern operating systems and hypervisors such as Linux/KVM deploy page-level memory fusion to merge physical memory pages with the same content (i.e., page fusion). A write to a fused memory page triggers a copy-on-write event that unmerges the page to preserve correct semantics. While page fusion is crucial in saving memory in production, recent work shows significant security weaknesses in its current implementations. Attackers can abuse timing side channels on the unmerge operation to leak sensitive data such as randomized pointers. Additionally, they can exploit the predictability of the merge operation to massage physical memory for reliable Rowhammer attacks. In this paper, we present VUsion, a secure page fusion system. VUsion can stop all the existing and even new classes of attack, where attackers leak information by side-channeling the merge operation or massage physical memory via predictable memory reuse patterns. To mitigate information disclosure attacks, we ensure attackers can no longer distinguish between fused and non-fused pages. To mitigate memory massaging attacks, we ensure fused pages are always allocated from a high-entropy pool. Despite its secure design, our comprehensive evaluation shows that VUsion retains most of the memory saving benefits of traditional memory fusion with negligible performance overhead while maintaining compatibility with other advanced memory management features.
Grand Pwning Unit: Accelerating Microarchitectural Attacks with the GPU Dark silicon is pushing processor vendors to add more specialized units such as accelerators to commodity processor chips. Unfortunately this is done without enough care to security. In this paper we look at the security implications of integrated Graphical Processor Units (GPUs) found in almost all mobile processors. We demonstrate that GPUs, already widely employed to accelerate a variety of benign applications such as image rendering, can also be used to "accelerate" microarchitectural attacks (i.e., making them more effective) on commodity platforms. In particular, we show that an attacker can build all the necessary primitives for performing effective GPU-based microarchitectural attacks and that these primitives are all exposed to the web through standardized browser extensions, allowing side-channel and Rowhammer attacks from JavaScript. These attacks bypass state-of-the-art mitigations and advance existing CPU-based attacks: we show the first end-to-end microarchitectural compromise of a browser running on a mobile phone in under two minutes by orchestrating our GPU primitives. While powerful, these GPU primitives are not easy to implement due to undocumented hardware features. We describe novel reverse engineering techniques for peeking into the previously unknown cache architecture and replacement policy of the Adreno 330, an integrated GPU found in many common mobile platforms. This information is necessary when building shader programs implementing our GPU primitives. We conclude by discussing mitigations against GPU-enabled attackers.
Architectural Support for Dynamic Linking All software in use today relies on libraries, including standard libraries (e.g., C, C++) and application-specific libraries (e.g., libxml, libpng). Most libraries are loaded in memory and dynamically linked when programs are launched, resolving symbol addresses across the applications and libraries. Dynamic linking has many benefits: It allows code to be reused between applications, conserves memory (because only one copy of a library is kept in memory for all the applications that share it), and allows libraries to be patched and updated without modifying programs, among numerous other benefits. However, these benefits come at the cost of performance. For every call made to a function in a dynamically linked library, a trampoline is used to read the function address from a lookup table and branch to the function, incurring memory load and branch operations. Static linking avoids this performance penalty, but loses all the benefits of dynamic linking. Given its myriad benefits, dynamic linking is the predominant choice today, despite the performance cost. In this work, we propose a speculative hardware mechanism to optimize dynamic linking by avoiding executing the trampolines for library function calls, providing the benefits of dynamic linking with the performance of static linking. Speculatively skipping the memory load and branch operations of the library call trampolines improves performance by reducing the number of executed instructions and gains additional performance by reducing pressure on the instruction and data caches, TLBs, and branch predictors. Because the indirect targets of library call trampolines do not change during program execution, our speculative mechanism never misspeculates in practice. We evaluate our technique on real hardware with production software and observe up to 4% speedup using only 1.5KB of on-chip storage.
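As a loose software analogy to the lookup-versus-bound-call distinction discussed above, the Python snippet below calls a function from a dynamically loaded C library via ctypes on a Unix-like system, once through repeated attribute lookups and once through a cached reference. It illustrates the flavor of the indirection cost being removed, not the paper's hardware mechanism, and the library path resolution is platform-dependent.

import ctypes, ctypes.util, time

# Resolve and call a function from a dynamically linked library (libm).
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

start = time.perf_counter()
for _ in range(100_000):
    libm.cos(1.0)          # attribute lookup on every call
slow = time.perf_counter() - start

cos = libm.cos             # "resolve once", loosely like a warmed PLT slot
start = time.perf_counter()
for _ in range(100_000):
    cos(1.0)
fast = time.perf_counter() - start
print(f"per-call lookup {slow:.3f}s vs cached reference {fast:.3f}s")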
Fooling the Sense of Cross-Core Last-Level Cache Eviction Based Attacker by Prefetching Common Sense Cross-core last-level cache (LLC) eviction based side-channel attacks are becoming practical because of the inclusive nature of shared resources (e.g., an inclusive LLC), that creates back-invalidation-hits at the private caches. Most of the cross-core eviction based side-channel attack strategies exploit the same for a successful attack. The fundamental principle behind all the cross-core eviction attack strategies is that the attacker can observe LLC access time differences (in terms of latency differences between events such as hits/misses) to infer about the data used by the victim. We fool the attacker (by providing LLC hits to the addresses of interest) through a back-invalidation-hits triggered hardware prefetching technique (BITP). BITP is an L2 cache level hardware prefetcher that prefetches the back-invalidated block addresses and refills the LLC (along with the L2) before the attacker's observation/access, efficiently nullifying inferences due to differences in access latencies. We show that BITP can fool the attacker with various security metrics related to LLC side-channel. BITP provides zero probability of success in terms of attacker's probability of success for Evict+Time, Evict+Reload, and Prime+Probe attacks. We also show the effectiveness of BITP in terms of performance by simulating SPEC CPU 2006, PARSEC, and CloudSuite benchmarks and find that, on average, BITP improves system performance marginally by 1.1%. Overall, BITP is a simple, practical, and yet powerful technique in mitigating various cross-core LLC eviction-based side-channel attacks. Compared to the state-of-the-art policies, BITP does not require support from software writer, operating system (OS), and runtime systems. Overall, BITP provides marginal improvement in system performance, providing security with no hardware and performance overhead, which makes BITP readily-implementable.
WiDir: A Wireless-Enabled Directory Cache Coherence Protocol As the core count in shared-memory manycores keeps increasing, it is becoming increasingly harder to design cache-coherence protocols that deliver high performance without an inordinate increase in complexity and cost. In particular, sharing patterns where a group of cores frequently reads and writes a shared variable are hard to support efficiently. Hence, programmers end up tuning their applicat...
Systematic Analysis of Randomization-based Protected Cache Architectures Recent secure cache designs aim to mitigate side-channel attacks by randomizing the mapping from memory addresses to cache sets. As vendors investigate deployment of these caches, it is crucial to understand their actual security.In this paper, we consolidate existing randomization-based secure caches into a generic cache model. We then comprehensively analyze the security of existing designs, inc...
No Need to Hide: Protecting Safe Regions on Commodity Hardware. As modern 64-bit x86 processors no longer support the segmentation capabilities of their 32-bit predecessors, most research projects assume that strong in-process memory isolation is no longer an affordable option. Instead of strong, deterministic isolation, new defense systems therefore rely on the probabilistic pseudo-isolation provided by randomization to "hide" sensitive (or safe) regions. However, recent attacks have shown that such protection is insufficient; attackers can leak these safe regions in a variety of ways. In this paper, we revisit isolation for x86-64 and argue that hardware features enabling efficient deterministic isolation do exist. We first present a comprehensive study on commodity hardware features that can be repurposed to isolate safe regions in the same address space (e.g., Intel MPX and MPK). We then introduce MemSentry, a framework to harden modern defense systems with commodity hardware features instead of information hiding. Our results show that some hardware features are more effective than others in hardening such defenses in each scenario and that features originally conceived for other purposes (e.g., Intel MPX for bounds checking) are surprisingly efficient at isolating safe regions compared to their software equivalent (i.e., SFI).
On the capacity of thermal covert channels in multicores. Modern multicore processors feature easily accessible temperature sensors that provide useful information for dynamic thermal management. These sensors were recently shown to be a potential security threat, since otherwise isolated applications can exploit them to establish a thermal covert channel and leak restricted information. Previous research showed experiments that document the feasibility of (low-rate) communication over this channel, but did not further analyze its fundamental characteristics. For this reason, the important questions of quantifying the channel capacity and achievable rates remain unanswered. To address these questions, we devise and exploit a new methodology that leverages both theoretical results from information theory and experimental data to study these thermal covert channels on modern multicores. We use spectral techniques to analyze data from two representative platforms and estimate the capacity of the channels from a source application to temperature sensors on the same or different cores. We estimate the capacity to be in the order of 300 bits per second (bps) for the same-core channel, i.e., when reading the temperature on the same core where the source application runs, and in the order of 50 bps for the 1-hop channel, i.e., when reading the temperature of the core physically next to the one where the source application runs. Moreover, we show a communication scheme that achieves rates of more than 45 bps on the same-core channel and more than 5 bps on the 1-hop channel, with less than 1% error probability. The highest rate shown in previous work was 1.33 bps on the 1-hop channel with 11% error probability.
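The capacity estimate rests on the standard Shannon formula for a frequency-selective channel, which is straightforward to evaluate numerically. The sketch below integrates per-bin capacities over toy signal and noise spectra; the low-pass shape and the flat noise floor are assumptions for illustration, not measurements from the paper.

import numpy as np

def channel_capacity(signal_psd, noise_psd, df):
    # Shannon capacity of a frequency-selective channel, approximated as a
    # sum of per-bin capacities: C = sum df * log2(1 + S(f)/N(f)).
    return float(np.sum(df * np.log2(1.0 + signal_psd / noise_psd)))

# Toy spectra: the thermal path acts as a low-pass filter, so the usable
# signal power decays with frequency while the sensor noise floor is flat.
f = np.linspace(0.1, 50.0, 500)            # Hz
df = f[1] - f[0]
signal_psd = 1.0 / (1.0 + (f / 2.0) ** 2)  # assumed low-pass signal PSD
noise_psd = np.full_like(f, 0.05)          # assumed flat noise PSD
print(f"estimated capacity: {channel_capacity(signal_psd, noise_psd, df):.1f} bps")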
NNPIM: A Processing In-Memory Architecture for Neural Network Acceleration Neural networks (NNs) have shown great ability to process emerging applications such as speech recognition, language recognition, image classification, video segmentation, and gaming. It is therefore important to make NNs efficient. Although attempts have been made to improve NNs' computation cost, the data movement between memory and processing cores is the main bottleneck for NNs' energy consumption and execution time. This makes the implementation of NNs significantly slower on traditional CPU/GPU cores. In this paper, we propose a novel processing in-memory architecture, called NNPIM, that significantly accelerates the neural network inference phase inside the memory. First, we design a crossbar memory architecture that supports fast addition, multiplication, and search operations inside the memory. Second, we introduce simple optimization techniques which significantly improve NNs' performance and reduce the overall energy consumption. We also map all NN functionalities using parallel in-memory components. To further improve efficiency, our design supports weight sharing to reduce the number of computations in memory and consequently speed up NNPIM computation. We compare the efficiency of our proposed NNPIM with GPUs and state-of-the-art PIM architectures. Our evaluation shows that our design achieves 131.5× higher energy efficiency and is 48.2× faster than the NVIDIA GTX 1080 GPU architecture. Compared to state-of-the-art neural network accelerators, NNPIM achieves on average 3.6× higher energy efficiency and is 4.6× faster, while providing the same classification accuracy.
An asynchronous leader election algorithm for dynamic networks An algorithm for electing a leader in an asynchronous network with dynamically changing communication topology is presented. The algorithm ensures that, no matter what pattern of topology changes occur, if topology changes cease, then eventually every connected component contains a unique leader. The algorithm combines ideas from the Temporally Ordered Routing Algorithm (TORA) for mobile ad hoc networks [16] with a wave algorithm [21], all within the framework of a height-based mechanism for reversing the logical direction of communication links [6]. It is proved that in certain well-behaved situations, a new leader is not elected unnecessarily.
Stability analysis and performance design for fuzzy-model-based control system under imperfect premise matching This paper investigates the stability analysis and performance design of nonlinear systems. To facilitate the stability analysis, the Takagi-Sugeno (T-S) fuzzy model is employed to represent the nonlinear plant. Under the imperfect premise matching in which T-S fuzzy model and fuzzy controller do not share the same membership functions, a fuzzy controller with enhanced design flexibility and robustness property is proposed to control the nonlinear plant. However, the nice characteristic given by the perfect premise matching, leading to conservative stability conditions, vanishes. In this paper, under the imperfect premise matching, information of membership functions of the fuzzy model and controller are considered in stability analysis. With the introduction of slack matrices, relaxed linear matrix inequality (LMI)-based stability conditions are derived using Lyapunov-based approach. Furthermore, LMI-based performance conditions are provided to guarantee system performance. Simulation examples are given to illustrate the effectiveness of the proposed approach.
Memristor Crossbar-Based Neuromorphic Computing System: A Case Study. By mimicking the highly parallel biological systems, neuromorphic hardware provides the capability of information processing within a compact and energy-efficient platform. However, traditional Von Neumann architecture and the limited signal connections have severely constrained the scalability and performance of such hardware implementations. Recently, many research efforts have been investigated...
A Data-Compressive Wired-OR Readout for Massively Parallel Neural Recording. Neural interfaces of the future will be used to help restore lost sensory, motor, and other capabilities. However, realizing this futuristic promise requires a major leap forward in how electronic devices interface with the nervous system. Next generation neural interfaces must support parallel recording from tens of thousands of electrodes within the form factor and power budget of a fully implan...
1.01992
0.019653
0.017209
0.016667
0.016667
0.016667
0.005556
0.002685
0.000595
0.000007
0
0
0
0
Joint mismatch and channel compensation for high-speed OFDM receivers with time-interleaved ADCs Analog-to-digital converters (ADCs) with high sampling rates and output resolution are required for the design of mostly digital transceivers in emerging multi-Gigabit communication systems. A promising approach is to use a time-interleaved (TI) architecture with slower sub-ADCs in parallel, but mismatch among the sub-ADCs, if left uncompensated, can cause error floors in receiver performance. Conventional mismatch compensation schemes typically have complexity (in terms of number of multiplications) that increases with the desired resolution at the output of the TI-ADC. In this paper, we investigate an alternative approach, in which mismatch and channel dispersion are compensated jointly, with the performance metric being overall link reliability rather than ADC performance. For an OFDM system, we characterize the structure of mismatch-induced interference, and demonstrate the efficacy of a frequency-domain interference suppression scheme whose complexity is independent of constellation size (which determines the desired resolution). Numerical results from computer simulation and from experiments on a hardware prototype show that the performance with the proposed joint mismatch and channel compensation technique is close to that without mismatch. While the proposed technique works with offline estimates of mismatch parameters, we provide an iterative, online method for joint estimation of mismatch and channel parameters which leverages the training overhead already available in communication signals.
Achieving Full Frequency and Space Diversity in Wireless Systems via BICM, OFDM, STBC, and Viterbi Decoding Orthogonal frequency-division multiplexing (OFDM) is known as an efficient technique to combat frequency-selective channels. In this paper, we show that the combination of bit-interleaved coded modulation (BICM) and OFDM achieves the full frequency diversity offered by a frequency-selective channel with any kind of power delay profile (PDP), conditioned on the minimum Hamming distance d_free of the convolutional code. This system has a simple Viterbi decoder with a modified metric. We then show that by combining such a system with space-time block coding (STBC), one can achieve the full space and frequency diversity of a frequency-selective channel with N transmit and M receive antennas. BICM-STBC-OFDM achieves the maximum diversity order of NML over L-tap frequency-selective channels regardless of the PDP of the channel. This latter system also has a simple Viterbi decoder with a properly modified metric. We verify our analytical results via simulations, including channels employed in the IEEE 802.11 standards.
Effect of Offset Mismatch in Time-Interleaved ADC Circuits on OFDM-BER Performance. This paper analyzes the effect of the offset mismatch in time-interleaved analog-to-digital converter (TI-ADC) circuits on bit error rate (BER) performance of a receiver for pulse amplitude-modulated or quadrature amplitude-modulated signals in orthogonal frequency division multiplexed systems. Exact BER expressions, as well as simplified BER expressions that hold for high signal-to-noise ratios (...
A Digital Timing Mismatch Calibration Technique in Time-Interleaved ADCs. A digital calibration scheme is proposed to minimize the timing mismatch in time-interleaved analog-to-digital converters (TIADCs). First, the scheme is to subtract the outputs from adjacent channel ADCs and to utilize the expectations of the absolute value of the subtracted results to represent the actual sampling time interval. The timing mismatch is recognized by comparing these expectations. T...
All-Digital Calibration of Timing Mismatch Error in Time-Interleaved Analog-to-Digital Converters. This paper presents an all-digital background calibration for timing mismatch in time-interleaved analog-to-digital converters (TI-ADCs). It combines digital adaptive timing mismatch estimation and digital derivative-based correction, achieving lower hardware cost and better suppression of timing mismatch tones than previous work. In addition, for the first time closed-form exact expressions for t...
A new adaptive blind background calibration of gain and timing mismatch for a two-channel time-interleaved ADC. This study investigates a new adaptive blind calibration structure for gain and timing mismatch errors in a two-channel Time-Interleaved Analog-to-Digital Converter (TI-ADC). The technique calibrates the mismatch error at the converter output without interrupting the normal operation of the TI-ADC. The output of the time-interleaved ADC is approximated by a model based on a Taylor series, with the effect of gain and timing mismatch represented by the first-order terms. Using a Least-Mean-Square (LMS), correlation-based algorithm, the approximate Taylor-series coefficients are identified. The proposed algorithm correlates the input signal with its chopped image, or the input signal with its chopped and delayed image, and can identify the coefficients blindly. It is simpler than previous similar techniques and avoids the drawbacks of a technique that combines the Taylor-series approximation with a filtered-X LMS algorithm. Simulation results confirm these claims and show that, with six sinusoidal inputs across the whole bandwidth of the proposed technique, a 48.2 dB improvement in spurious-free dynamic range (SFDR) is achieved.
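A simplified version of the first-order Taylor correction with LMS adaptation can be sketched as follows. For clarity this toy adapts against a known training tone (a foreground variant), whereas the paper's algorithm is blind and uses chopped images of the input; the mismatch values, step size, and central-difference derivative estimator are illustrative, and the timing estimate carries a small bias from the derivative approximation.

import numpy as np

f0, n = 0.11, 4000                 # tone frequency (cycles/sample), length
t = np.arange(n)
x = np.sin(2 * np.pi * f0 * t)     # ideal (reference) samples
gain, tau = 0.03, 0.02             # channel-1 gain error and timing skew
odd = np.arange(1, n, 2)           # indices sampled by channel 1
y = x.copy()
y[odd] = (1 + gain) * np.sin(2 * np.pi * f0 * (odd + tau))

deriv = np.gradient(y)             # digital derivative (central differences)

g_hat, t_hat, mu = 0.0, 0.0, 0.05
for k in odd[1:-1]:
    corrected = y[k] - g_hat * y[k] - t_hat * deriv[k]
    e = corrected - x[k]           # foreground error against the known tone
    g_hat += mu * e * y[k]         # LMS updates of the Taylor coefficients
    t_hat += mu * e * deriv[k]

print(f"gain estimate {g_hat:.4f} (true {gain}), "
      f"timing estimate {t_hat:.4f} (true {tau} samples)")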
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. Finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents.We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any finger-printing technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
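Winnowing itself fits in a dozen lines. The sketch below hashes all k-grams, slides a window of w hashes, and keeps the rightmost minimum of each window. Python's built-in hash is salted per process, so a real fingerprinting service would substitute a stable rolling hash, and the k and w values are arbitrary demo choices.

def winnow(text, k=5, w=4):
    # Hash every k-gram, slide a window of w consecutive hashes, and keep
    # the minimum hash of each window (rightmost minimum on ties).
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        j = max(p for p in range(w) if window[p] == m)  # rightmost minimum
        fingerprints.add((i + j, window[j]))            # (position, hash)
    return fingerprints

a = winnow("the quick brown fox jumps over the lazy dog")
b = winnow("a quick brown fox jumps over a sleepy dog")
shared = {h for _, h in a} & {h for _, h in b}
print(f"{len(shared)} fingerprints shared out of {len(a)} vs {len(b)}")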
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i. e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortion at higher frequencies. Even with an ideal channel, at every package-die interface there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends results in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant with the new IEEE 802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms, for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and achieves a 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1.075556
0.066667
0.066667
0.038889
0.022222
0.006667
0
0
0
0
0
0
0
0
Advanced Control Architectures for Intelligent Microgrids—Part I: Decentralized and Hierarchical Control This paper presents a review of advanced control techniques for microgrids. This paper covers decentralized, distributed, and hierarchical control of grid-connected and islanded microgrids. At first, decentralized control techniques for microgrids are reviewed. Then, the recent developments in the stability analysis of decentralized controlled microgrids are discussed. Finally, hierarchical control for microgrids that mimic the behavior of the mains grid is reviewed.
Machine-to-machine communications for home energy management system in smart grid. Machine-to-machine (M2M) communications have emerged as a cutting edge technology for next-generation communications, and are undergoing rapid development and inspiring numerous applications. This article presents an investigation of the application of M2M communications in the smart grid. First, an overview of M2M communications is given. The enabling technologies and open research issues of M2M ...
Reduced-Order Model and Stability Analysis of Low-Voltage DC Microgrid. Depleting fossil fuels, increasing energy demand, and need for high-reliability power supply motivate the use of dc microgrids. This paper analyzes the stability of low-voltage dc microgrid systems. Sources are controlled using a droop-based decentralized controller. Various components of the system have been modeled. A linearized system model is derived using small-signal approximation. The stability of the system is analyzed by identifying the eigenvalues of the system matrix. The sufficiency condition for stable operation of the system is derived. It provides upper bound on droop constants and is useful during planning and designing of dc microgrids. Furthermore, the sensitivity of system poles to variation in cable resistance and inductance is identified. It is proved that the poles move further inside the negative real plane with a decrease in inductance or an increase in resistance. The method proposed in this paper is applicable to any interconnecting structure of sources and loads. The results obtained by analysis are verified by detailed simulation study. Root locus plots are included to confirm the movement of system poles. The viability of the model is confirmed by experimental results from a scaled-down laboratory prototype of a dc microgrid developed for the purpose.
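The eigenvalue test used in such small-signal analyses is simple to reproduce. The sketch below checks whether all eigenvalues of a linearized system matrix lie in the open left half plane; the 2×2 matrix is a made-up coupling example, not a model derived from the paper's dc microgrid.

import numpy as np

def is_small_signal_stable(A):
    # dx/dt = A x is asymptotically stable iff every eigenvalue of A
    # has a strictly negative real part.
    eig = np.linalg.eigvals(A)
    return bool(np.all(eig.real < 0)), eig

# Toy example: two unit-damped states with cross-coupling gain k.
for k in (0.5, 1.5):
    A = np.array([[-1.0, k], [k, -1.0]])
    stable, eig = is_small_signal_stable(A)
    print(f"k={k}: stable={stable}, eigenvalues={np.round(eig.real, 2)}")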
High-Fidelity Model Order Reduction for Microgrids Stability Assessment. Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has been recently realized that the stability conditions for such microgrids are significantly different from those known for large-scale power systems. In particular, the network dynamics, despite its fast nature, appears to have major influence on stability of slower modes. While detailed ...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
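The scheme is short enough to sketch in full. The following Python implementation of a (k, n) threshold sharing evaluates a random degree-(k-1) polynomial over a prime field and reconstructs the constant term by Lagrange interpolation at zero; the prime and the demo secret are arbitrary choices.

import random

P = 2**127 - 1  # a Mersenne prime large enough for the demo secret

def make_shares(secret, k, n, rng=random.Random(42)):
    # Degree-(k-1) polynomial with constant term = secret; share i is (i, f(i)).
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P); any k shares suffice.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 reconstruct
assert reconstruct(shares[2:5]) == 123456789
print("reconstructed:", reconstruct(random.sample(shares, 3)))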
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Cache attacks and countermeasures: the case of AES We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
A normal form for XML documents This paper takes a first step towards the design and normalization theory for XML documents. We show that, like relational databases, XML documents may contain redundant information, and may be prone to update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Our goal is to find a way of converting an arbitrary DTD into a well-designed one, that avoids these problems. We first introduce the concept of a functional dependency for XML, and define its semantics via a relational representation of XML. We then define an XML normal form, XNF, that avoids update anomalies and redundancies. We study its properties and show that it generalizes BCNF and a normal form for nested relations when those are appropriately coded as XML documents. Finally, we present a lossless algorithm for converting any DTD into one in XNF.
Synchronization via Pinning Control on General Complex Networks. This paper studies synchronization via pinning control on general complex dynamical networks, such as strongly connected networks, networks with a directed spanning tree, weakly connected networks, and directed forests. A criterion for ensuring network synchronization on strongly connected networks is given. It is found that the vertices with very small in-degrees should be pinned first. In addition, it is shown that the original condition with controllers can be reformulated such that it does not depend on the form of the chosen controllers, which implies that the vertices with very large out-degrees may be pinned. Then, a criterion for achieving synchronization on networks with a directed spanning tree, which can be composed of many strongly connected components, is derived. It is found that the strongly connected components with very few connections from other components should be controlled and the components with many connections from other components can achieve synchronization even without controls. Moreover, a simple but effective pinning algorithm for reaching synchronization on a general complex dynamical network is proposed. Finally, some simulation examples are given to verify the proposed pinning scheme.
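As a hedged toy illustration of the pinning idea (not the paper's model or criteria), the sketch below couples simple scalar nodes over a random directed graph and applies pinning feedback to the nodes with the smallest in-degrees, as the abstract suggests. The graph density, gains, and number of pinned nodes are arbitrary assumptions; other random graphs may need more pinned nodes or larger gains.

```python
# Toy pinning-control simulation: diffusively coupled scalar nodes driven
# to a common target by feedback applied only at a few pinned vertices.
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)  # A[i, j] = 1: node i listens to j
np.fill_diagonal(A, 0.0)
in_deg = A.sum(axis=1)                         # in-degree of each node
L = np.diag(in_deg) - A                        # Laplacian of the directed graph

pinned = np.argsort(in_deg)[:8]                # pin the smallest in-degree nodes first
D = np.zeros(n)
D[pinned] = 2.0                                # pinning feedback gains (arbitrary)

target, c, dt = 1.0, 1.0, 0.01                 # common target state, coupling, step
x = rng.normal(size=n)
for _ in range(20000):
    # dx/dt = -c*L x - D*(x - target): diffusive coupling plus pinning feedback
    x = x + dt * (-c * (L @ x) - D * (x - target))

print("pinned nodes:", pinned)
print("max deviation from target:", np.abs(x - target).max())
```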
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± Δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as Δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
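A minimal sketch of the direct insertion technique for a conversion factor of (N + Δ)/N, using the naive evenly spread pattern rather than the paper's optimal selection; the block size, Δ, and test signal are illustrative assumptions.

```python
# Direct-insertion fractional SRC sketch: repeat d samples per N-sample block.
import numpy as np

def src_insert(x, N, d):
    """(N + d)/N rate conversion: repeat d samples in every block of N."""
    # Naive pattern: spread the insertion points evenly across the block.
    points = sorted(set(np.linspace(0, N - 1, d + 2)[1:-1].astype(int)), reverse=True)
    out = []
    for start in range(0, len(x) - N + 1, N):
        block = list(x[start:start + N])
        for p in points:
            block.insert(p, block[p])          # direct insertion: repeat sample p
        out.extend(block)
    return np.asarray(out)

fs = 48000
t = np.arange(480) / fs
x = np.sin(2 * np.pi * 1000.0 * t)
y = src_insert(x, N=48, d=1)                   # 48 kHz -> 49 kHz
print(len(x), "input samples ->", len(y), "output samples")
```

The distortion the paper minimizes comes from these repeated (or deleted) samples; its contribution is choosing the insertion pattern and the multiple-set combination optimally rather than naively.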
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
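The abstract's reconstruction engine is ADMM. As a hedged illustration of the underlying algorithm (not the chip's fixed-point architecture), here is the textbook ADMM iteration for the LASSO form of compressed-sensing recovery, with an invented random sensing setup.

```python
# Textbook ADMM for LASSO: min 0.5*||Ax - b||^2 + lam*||z||_1, subject to x = z.
import numpy as np

def admm_lasso(A, b, lam=0.01, rho=1.0, iters=300):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)       # matrix reused by every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))                     # LS solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft-threshold
        u = u + x - z                                                     # dual update
    return z

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                       # sparse signal, few measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
b = A @ x_true
x_hat = admm_lasso(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```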
1.075556
0.066667
0.066667
0.066667
0
0
0
0
0
0
0
0
0
0
12.4 A 10mW fully integrated 2-to-13V-input buck-boost SC converter with 81.5% peak efficiency In recent years, significant progress has been made on switched-capacitor DC-DC converters as they enable fully integrated on-chip power management. New converter topologies overcame the fixed input-to-output voltage limitation and achieved high efficiency at high power densities [1-6]. SC converters are attractive not only for mobile handheld devices with small input and output voltages, but also for power conversion in IoE, industrial and automotive applications, etc. Such applications need to be capable of handling widely varying input voltages of more than 10V, which requires a large number of conversion ratios [1-3]. The goal is to achieve a fine granularity with the least number of flying capacitors. In [1] an SC converter was introduced that achieves these goals at low input voltage VIN ≤ 2.5V. [2] shows good efficiency up to VIN = 8V while its conversion ratio is restricted to ≤1/2 with a limited, non-equidistant number of conversion steps. A particular challenge arises with increasing input voltage as several loss mechanisms, like parasitic bottom-plate losses and gate-charge losses of high-voltage transistors, become significant. High input voltages require supporting circuits like level shifters, auxiliary supply rails, etc., which occupy additional area and add losses [2-5]. The combination of both increasing voltage and conversion ratios (VCR) lowers the efficiency and the achievable output power of SC converters. [3] and [5] use external capacitors to enable higher output power, especially for higher VIN. However, this is contradictory to the goal of a fully integrated power supply.
Triple-Mode, Hybrid-Storage, Energy Harvesting Power Management Unit: Achieving High Efficiency Against Harvesting and Load Power Variabilities. This paper presents a triple-mode, hybrid storage, energy-harvesting power management unit (EH PMU) that interfaces a photovoltaic cell, a regulated load, and a rechargeable battery. The objective is to maximize the end-to-end conversion efficiency of the EH PMU against temporal mismatch and variabilities of harvesting and load power. To minimize the involvement (charging or discharging) of a batt...
A Switched Capacitor Energy Harvester Based on a Single-Cycle Criterion for MPPT to Eliminate Storage Capacitor. A single-cycle criterion maximum power point tracking (MPPT) technique is proposed to eliminate the need for bulky on-chip capacitors in the energy harvesting system for Internet of Everything (IoE). The conventional time-domain MPPT features ultra-low power consumption; however, it also requires a nanofarad-level capacitor for fine time resolution. The proposed maximum power monitoring does not r...
A Soft-Charging-Based SC DC–DC Boost Converter With Conversion-Ratio-Insensitive High Efficiency for Energy Harvesting in Miniature Sensor Systems This paper introduces a fully integrated switched-capacitor (SC) DC-DC boost converter suitable for energy harvesting in miniature sensor systems with varying harvesting source and battery voltages. Unlike the conventional SC DC-DC boost converters, where efficiency is optimized only for a few topology-dependent voltage conversion ratios (VCRs), the proposed soft-charging-based SC converter achieved VCR-insensitive evenly high efficiency for a wide range of input and output voltages. A test chip was fabricated in 180nm process, and the measured peak efficiency and average efficiency are 85.3% and 83.6%, respectively, for 1.8V regulated output voltage and a wide range of input voltages (0.95–1.8 V). Also, a peak efficiency of 88.9% and average efficiency of 87.6% are achieved for a wide range of battery output voltages (3.0–4.2 V) and a 1.2V harvesting source input voltage.
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to the integrated circuit security. Circuit level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so called converter-reshuffling to insert random spikes in the power consumption profile. An entropy based metric is developed to evaluate the security-performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
An Arithmetic Progression Switched-Capacitor DC-DC Converter with Soft VCR Transitions Achieving 93.7% Peak Efficiency and 400 mA Output Current Dynamic source adaptation and supply modulation can benefit the power efficiency and system functionality of energy-harvesting interfaces, voltage-scalable SoCs, device drivers, power amplifiers, and others. A switched-capacitor (SC) DC-DC converter can achieve high power conversion efficiency (PCE) and power density at the hundreds-of-mW level. Several reconfigurable SC topologies emerged to generate m...
A Recursive Switched-Capacitor DC-DC Converter Achieving $2^{N}-1$ Ratios With High Efficiency Over a Wide Output Voltage Range. A Recursive Switched-Capacitor (RSC) topology is introduced that enables reconfiguration among 2 N-1 conversion ratios while achieving minimal capacitive charge-sharing loss for a given silicon area. All 2 N-1 ratios are realized by strategically interconnecting N 2:1 SC cells either in series, in parallel, or in a stacked configuration such that the number of input and ground connections are maxi...
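A quick combinatorial check of the 2^N - 1 ratio count, under the simplifying assumption that each 2:1 cell either halves the running ratio or averages it with the input; this is a sketch of the achievable ratio set, not a circuit model.

```python
# Enumerate the conversion ratios reachable by composing N ideal 2:1 SC cells.
from fractions import Fraction

def rsc_ratios(n_cells):
    ratios = {Fraction(1)}                 # straight pass-through
    for _ in range(n_cells):
        nxt = set()
        for r in ratios:
            nxt.add(r / 2)                 # a 2:1 cell halves the running ratio
            nxt.add((r + 1) / 2)           # ...or averages it with the input
        ratios |= nxt
    return sorted(r for r in ratios if r < 1)

for n in (1, 2, 3, 4):
    rs = rsc_ratios(n)
    print(n, "cells:", len(rs), "ratios below 1, e.g.", rs[:4])
    assert len(rs) == 2**n - 1             # matches the paper's 2^N - 1 count
```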
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating post-silicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core System on Chips (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables a 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in “Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms,” pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w: despite similar complexity results for related problems, it appears that this growth has never been analyzed.
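For context, the deterministic baseline that the randomized algorithm improves on is the classical doubling strategy, which achieves competitive ratio 9 for w = 2. A small simulation of that deterministic strategy, with an arbitrarily placed goal:

```python
# Deterministic doubling strategy for the two-lane cow-path problem.
def doubling_search(goal_path, d):
    """Total distance walked until the goal at distance d >= 1 is found."""
    walked, step, path = 0.0, 1.0, 0
    while True:
        if path == goal_path and step >= d:
            return walked + d              # final leg reaches the goal
        walked += 2 * step                 # go out `step` and return to the origin
        path, step = 1 - path, 2 * step    # switch path, double the radius

for d in (1.5, 10, 100):
    cost = doubling_search(goal_path=1, d=d)
    print(f"d={d}: cost={cost}, ratio={cost / d:.2f}")
```

The worst-case ratio approaches 9 when the goal sits just beyond a turnaround point; the paper's randomized strategy cuts the expected ratio roughly in half.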
Incremental Stochastic Subgradient Algorithms for Convex Optimization This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
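A minimal numerical sketch of the cyclic variant with noisy subgradients and a diminishing step size, minimizing a sum of absolute-value terms held by agents on a ring; the component functions, noise level, and step-size rule are illustrative assumptions, not the paper's general setting.

```python
# Cyclic incremental stochastic subgradient method for sum_i |x - a_i|.
import numpy as np

rng = np.random.default_rng(2)
targets = np.array([1.0, 2.0, 5.0, 6.0])   # a_i, each known only to agent i
x, k = 0.0, 0
for epoch in range(500):
    for a in targets:                      # the iterate is passed around the ring
        k += 1
        g = np.sign(x - a)                 # subgradient of f_i(x) = |x - a_i|
        g += rng.normal(scale=0.1)         # stochastic evaluation error
        x -= (1.0 / k) * g                 # diminishing step size
print("final x:", x)                       # any point in [2, 5] minimizes the sum
```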
A framework for security on NoC technologies Multiple heterogeneous processor cores, memory cores and application specific IP cores integrated in a communication network, also known as networks on chips (NoCs), will handle a large number of applications including security. Although NoCs offer more resistance to bus probing attacks, power/EM attacks and network snooping attacks are relevant. For the first time, a framework for security on NoC at both the network level (or transport layer) and at the core level (or application layer) is proposed. At the network level, each IP core has a security wrapper and a key-keeper core is included in the NoC, protecting encrypted private and public keys. Using this framework, unencrypted keys are prevented from leaving the cores and NoC. This is crucial to prevent untrusted software on or off the NoC from gaining access to keys. At the core level (application layer) the security framework is illustrated with software modification for resistance against power attacks with extremely low overheads in energy. With the emergence of secure IP cores in the market and nanometer technologies, a security framework for designing NoCs is crucial for supporting future wireless Internet enabled devices.
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.1
0.1
0.1
0.1
0.05
0.05
0.011111
0
0
0
0
0
0
0
All-Digital PLL for Bluetooth Low Energy Using 32.768-kHz Reference Clock and ≤0.45-V Supply. In this paper, we introduce an all-digital phase-locked loop (ADPLL) for Bluetooth low energy (BLE) that eliminates the need for a crystal oscillator (XO) other than a 32.768-kHz real-time clock (RTC) already present in wireless systems. Specifically, we propose to replace the conventional channel settling with a band settling that would be carried out only once per global device power up. The ADP...
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations to improve throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently-activated critical paths. The second error-detection design offers a less-intrusive approach for dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits from the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at 1/2 FCLK. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
A 0.5-V 1.6-mW 2.4-GHz Fractional-N All-Digital PLL for Bluetooth LE With PVT-Insensitive TDC Using Switched-Capacitor Doubler in 28-nm CMOS. This paper proposes an ultra-low-voltage (ULV) fractional-N all-digital PLL (ADPLL) powered from a single 0.5-V supply. While its digitally controlled oscillator (DCO) runs directly at 0.5 V, an internal switched-capacitor dc-dc converter “doubles” the supply voltage to all the digital circuitry and particularly regulates the time-to-digital converter (TDC) supply to stabilize its resolution, thus...
Analysis and Optimum Design of a Class E RF Power Amplifier A new analysis of a class E power amplifier is presented and a fully analytic design approach is developed. Using our analysis, all of the circuit currents and voltages and, hence, the power dissipation in each component is calculated as a function of a key design parameter, denoted by x. This parameter is the ratio of the resonance frequency of the shunt inductor and shunt capacitor to the operat...
A 0.6-mW 16-FSK Receiver Achieving a Sensitivity of −103 dBm at 100 kb/s This article presents an RF receiver (RX) designed to exploit the inherent SNR advantage offered by non-coherent 16-FSK modulation relative to more conventional non-coherent modulation schemes, such as FSK and OOK. Specifically, this article demonstrates that when demodulated using two-pole bandpass filters, 16-FSK offers a 4-dB sensitivity advantage compared with binary frequency shift keying (BFSK), at the cost of reduced spectral efficiency at the same data rate. This article then presents the design of a 16-FSK-compatible RX front end, which performs demodulation through 16 N-path filters driven by temperature-stabilized phase-locked loops to ensure calibration-free filter center frequency control, along with augmented Miller capacitors for tight area-constrained bandwidth control. Implemented in 65-nm CMOS, the RX consumes 0.6 mW from a 0.5-V supply while achieving a sensitivity of -103.2 dBm at 100 kb/s, for a power-sensitivity-data-rate figure of merit of 185.2 dB, which represents a 3.2 dB advance over state of the art.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
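The protocol behind the parable is single-decree Paxos. Below is a compact, hedged sketch of the two phases with in-memory acceptors; messaging, persistence, failures, and leader election are omitted, and the class and function names are invented for illustration.

```python
# Single-decree Paxos sketch: a proposer drives prepare/accept over acceptors.
class Acceptor:
    def __init__(self):
        self.promised = -1          # highest ballot promised so far
        self.accepted = None        # (ballot, value) accepted, if any

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    # Phase 1: gather promises from a majority.
    replies = [a.prepare(ballot) for a in acceptors]
    promises = [acc for ok, acc in replies if ok]
    if len(promises) <= len(acceptors) // 2:
        return None
    # Adopt the value of the highest-ballot prior acceptance, if any.
    prior = max((acc for acc in promises if acc), default=None)
    chosen = prior[1] if prior else value
    # Phase 2: ask the acceptors to accept the chosen value.
    acks = sum(a.accept(ballot, chosen) for a in acceptors)
    return chosen if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, 1, "decree A"))   # -> decree A
print(propose(acceptors, 2, "decree B"))   # -> decree A (already chosen, so preserved)
```

The second call illustrates the safety property: once a majority has accepted a decree, any later proposer discovers and re-proposes it rather than overwriting it.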
Multi-Strategy Coevolving Aging Particle Optimization We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
The software radio concept Since the early 1980s, an exponential growth of cellular mobile systems has been observed, which has produced, all over the world, the definition of a plethora of analog and digital standards. In 2000 the industrial competition between Asia, Europe, and America promises a very difficult path toward the definition of a unique standard for future mobile systems, although market analyses underline the trading benefits of a common worldwide standard. It is therefore in this field that the software radio concept is emerging as a potential pragmatic solution: a software implementation of the user terminal able to dynamically adapt to the radio environment in which it is located at any given time. In fact, the term software radio stands for radio functionalities defined by software, meaning the possibility to define by software the typical functionality of a radio interface, usually implemented in TX and RX equipment by dedicated hardware. The presence of the software defining the radio interface necessarily implies the use of DSPs to replace dedicated hardware, to execute, in real time, the necessary software. In this article objectives, advantages, problem areas, and technological challenges of software radio are addressed. In particular, SW radio transceiver architecture, possible SW implementation, and its download are analyzed.
Dynamic sensor collaboration via sequential Monte Carlo We consider the application of sequential Monte Carlo (SMC) methods for Bayesian inference to the problem of information-driven dynamic sensor collaboration in clutter environments for sensor networks. The dynamics of the system under consideration are described by nonlinear sensing models within randomly deployed sensor nodes. The exact solution to this problem is prohibitively complex due to the nonlinear nature of the system. The SMC methods are, therefore, employed to track the probabilistic dynamics of the system and to make the corresponding Bayesian estimates and predictions. To meet the specific requirements inherent in sensor network, such as low-power consumption and collaborative information processing, we propose a novel SMC solution that makes use of the auxiliary particle filter technique for data fusion at densely deployed sensor nodes, and the collapsed kernel representation of the a posteriori distribution for information exchange between sensor nodes. Furthermore, an efficient numerical method is proposed for approximating the entropy-based information utility in sensor selection. It is seen that under the SMC framework, the optimal sensor selection and collaboration can be implemented naturally, and significant improvement is achieved over existing methods in terms of localizing and tracking accuracies.
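As a hedged illustration of the SMC machinery the paper builds on (the plain bootstrap filter, not the auxiliary particle filter or the sensor-selection logic), here is a scalar nonlinear tracking example; the dynamics, sensing model, and noise levels are made up.

```python
# Bootstrap particle filter on a toy nonlinear state-space model.
import numpy as np

rng = np.random.default_rng(3)
T, n_pts = 50, 500

# Simulate a scalar nonlinear system with a quadratic sensing model.
xs, ys, x = [], [], 0.0
for t in range(T):
    x = 0.9 * x + np.sin(t / 5.0) + rng.normal(scale=0.3)
    xs.append(x)
    ys.append(x**2 / 20.0 + rng.normal(scale=0.2))

# Filter: propagate particles, weight by the likelihood, resample.
pts, est = rng.normal(size=n_pts), []
for t in range(T):
    pts = 0.9 * pts + np.sin(t / 5.0) + rng.normal(scale=0.3, size=n_pts)
    w = np.exp(-0.5 * ((ys[t] - pts**2 / 20.0) / 0.2) ** 2) + 1e-12  # underflow guard
    w /= w.sum()
    est.append(np.dot(w, pts))
    pts = pts[rng.choice(n_pts, n_pts, p=w)]   # multinomial resampling

print("RMS tracking error:", np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))
```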
Clocking Analysis, Implementation and Measurement Techniques for High-Speed Data Links—A Tutorial The performance of high-speed wireline data links depends crucially on the quality and precision of their clocking infrastructure. For future applications, such as microprocessor systems that require terabytes/s of aggregate bandwidth, signaling system designers will have to become even more aware of detailed clock design tradeoffs in order to jointly optimize I/O power, bandwidth, reliability, silicon area and testability. The goal of this tutorial is to assist I/O circuit and system designers in developing intuitive and practical understanding of I/O clocking tradeoffs at all levels of the link hierarchy from the circuit-level implementation to system-level architecture.
Computing the Dynamic Diameter of Non-Deterministic Dynamic Networks is Hard. A dynamic network is a communication network whose communication structure can evolve over time. The dynamic diameter is the counterpart of the classical static diameter: it is the maximum time needed for a node to causally influence any other node in the network. We consider the problem of computing the dynamic diameter of a given dynamic network. If the evolution is known a priori, that is, if the network is deterministic, it is known that the dynamic diameter is quite easy to compute. If the evolution is not known a priori, that is, if the network is non-deterministic, we show that the problem is hard to solve or approximate. In some cases, this hardness holds even when there is a static connected subgraph for the dynamic network. In this note, we consider an important subfamily of non-deterministic dynamic networks: the time-homogeneous dynamic networks. We prove that it is hard to compute and approximate the value of the dynamic diameter for time-homogeneous dynamic networks.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation at ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
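For orientation, the following behavioral sketch shows only the SAR binary-search loop itself; the paper's contribution, the adaptive-reset CDAC switching and the NTV-optimized circuits, is a circuit-level matter not modeled here, and the reference voltage and resolution below are illustrative.

```python
# Behavioral model of the SAR conversion loop (ideal DAC and comparator).
def sar_convert(vin, vref=0.5, bits=10):
    """Successive approximation: one comparator decision per bit, MSB first."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)             # tentatively set the next bit
        vdac = vref * trial / (1 << bits)   # ideal CDAC output for the trial code
        if vin >= vdac:                     # comparator decision
            code = trial                    # keep the bit, else leave it cleared
    return code

for v in (0.05, 0.25, 0.49):
    c = sar_convert(v)
    print(f"vin={v:.2f} V -> code={c} ({c / 2**10 * 0.5:.4f} V)")
```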
1.11
0.12
0.12
0.12
0.12
0.1
0.05
0
0
0
0
0
0
0
Multimicrogrid Load Balancing Through EV Charging Networks Energy demand and supply vary from area to area, where an unbalanced load may occur and endanger the system security constraints and cause significant differences in the locational marginal price (LMP) in the power system. With the increasing proportion of local renewable energy (RE) sources in microgrids that are connected to the power grid and the growing number of electric vehicle (EV) charging...
Spatial and Temporal Model of Electric Vehicle Charging Demand. This paper presents a spatial and temporal model of electric vehicle charging demand for a rapid charging station located near a highway exit. Most previous studies have assumed a fixed charging location and fixed charging time during the off-peak hours for anticipating electric vehicle charging demand. Some other studies have been based on limited charging scenarios at typical locations instead of a m...
Hybrid Optimization for Economic Deployment of ESS in PV-Integrated EV Charging Stations. Electric vehicle (EV) charging stations will play an important role in the smart city. Uncoordinated and statistical EV charging loads would further stress the distribution system. Photovoltaic (PV) systems, which can reduce this stress, also show variation due to weather conditions. In this paper, a hybrid optimization algorithm for energy storage management is proposed, which shifts its mode of ...
Dynamic Pricing for Electric Vehicle Extreme Fast Charging Significant developments and advancement pertaining to electric vehicle (EV) technologies, such as extreme fast charging (XFC), have been witnessed in the last decade. However, there are still many challenges to the wider deployment of EVs. One of the major barriers is the limited availability of fast charging stations. A possible solution is to build a fast charging sharing system, by encouraging small business owners or even householders to install and share their fast charging devices, reselling electricity sourced from traditional utility companies or from their own solar grid. To incentivize such a system, a smart dynamic pricing scheme is needed to facilitate those growing markets with fast charging stations. The pricing scheme is expected to take into account the dynamics intertwined with pricing, demand, and environment factors, in an effort to maximize the long-term profit with the optimal price. To this end, this paper formulates the problem of dynamic pricing for fast charging as a Markov decision process and accordingly proposes several algorithmic schemes for different applications. An experimental study is conducted, yielding useful and interesting insights.
Enabling Extreme Fast Charging Technology for Electric Vehicles As a significant part of the next-generation smart grid, electric vehicles (EVs) are essential for most countries to achieve energy independence, secure energy supply, and alleviate the pressure on environmental protection and energy security. Although EVs have grown rapidly, the slow recharge time is still the biggest obstacle to a wider application. A gasoline vehicle can pump enough gasoline in less than ten minutes to carry itself a few hundred miles, whereas most of today's fast-charging techniques take half an hour to provide only a very limited electric driving range.
Electric Vehicles with a Battery Switching Station: Adoption and Environmental Impact The transportation sector's carbon footprint and dependence on oil are of deep concern to policy makers in many countries. Use of all-electric drive trains is arguably the most realistic medium-term solution to address these concerns. However, motorist anxiety induced by an electric vehicle's limited range and high battery cost have constrained consumer adoption. A novel switching-station-based solution is touted as a promising remedy. Vehicles use standardized batteries that, when depleted, can be switched for fully charged batteries at switching stations, and motorists only pay for battery use. We build a model that highlights the key mechanisms driving adoption and use of electric vehicles in this new switching-station-based electric vehicle system and contrast it with conventional electric vehicles. Our model employs results from repairable item inventory theory to capture switching-station operation; we embed this model in a behavioral model of motorist use and adoption. Switching-station systems effectively transfer range risk from motorists to the station operator, who, through statistical economies of scale, can better manage it. We find that this transfer of risk can lead to higher electric vehicle adoption than in a conventional system, but it also encourages more driving than a conventional system does. We calibrate our models with motorist behavior data, electric vehicle technology data, operation costs, and emissions data to estimate the relative effectiveness of the two systems under the status quo and other plausible future scenarios. We find that the system that is more effective at reducing emissions is often less effective at reducing oil dependence, and the misalignment between the two objectives is most severe when the energy mix is coal heavy and has advanced battery technology. Increases in gasoline prices, by imposition of taxes for instance, are much more effective in reducing carbon emissions, whereas battery-price-reducing policy interventions are more effective for reducing oil dependence. Taken together, our results help a policy maker identify the superior system for achieving the desired objectives. They also highlight that policy makers should not conflate the dual objectives of oil dependence and emissions reductions: the preferred system, and the policy interventions that further that system, may be different for the two objectives. This paper was accepted by Yossi Aviv, operations management.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was AT91M40400. The results clearly establish scratchpad memory as a low power alternative in most situations with an average energy reduction of 40%. Further the average area-time reduction for the scratchpad memory was 46% of the cache memory.
Approximate counting, uniform generation and rapidly mixing Markov chains The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ ℝ or for no β ∈ ℝ. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique
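A small numerical check of the headline result for TPDF dither of 2 LSB peak-to-peak (illustrative parameters, not the paper's derivation): the mean and variance of the total error stay essentially constant as the input sweeps through an LSB, with total error variance near LSB²/4.

```python
# Nonsubtractive TPDF dither demo: error moments become input-independent.
import numpy as np

rng = np.random.default_rng(4)
lsb = 1.0
for x in np.linspace(0.0, lsb, 9):              # sweep the input within one LSB
    d = rng.uniform(-lsb / 2, lsb / 2, 200000) + \
        rng.uniform(-lsb / 2, lsb / 2, 200000)  # TPDF = sum of two rectangular pdfs
    err = np.round((x + d) / lsb) * lsb - x     # total error, dither included
    print(f"x={x:.3f}  mean={err.mean():+.5f}  var={err.var():.4f}")
# The mean stays near 0 and the variance near lsb**2/4 for every input,
# unlike undithered rounding, whose error depends strongly on x.
```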
DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing The DySER (Dynamically Specializing Execution Resources) architecture supports both functionality specialization and parallelism specialization. By dynamically specializing frequently executing regions and applying parallelism mechanisms, DySER provides efficient functionality and parallelism specialization. It outperforms an out-of-order CPU, Streaming SIMD Extensions (SSE) acceleration, and GPU acceleration while consuming less energy. The full-system field-programmable gate array (FPGA) prototype of DySER integrated into OpenSparc demonstrates a practical implementation.
An Identity Authentication Mechanism Based on Timing Covert Channel In identity authentication, many advanced encryption techniques are applied to confirm and protect the user identity. Although the identity information is transmitted as ciphertext over the Internet, attackers can steal and forge the identity by eavesdropping, cryptanalysis and forging. In this paper, a new identity authentication mechanism is proposed, which exploits the Timing Covert Channel (TCC) to transmit the identity information. TCC was originally a hacker technique to leak information under supervision, which uses the sending times of packets to indicate the information. In our method, the intervals between packets are applied to indicate the authentication tags. It is difficult for attackers to eavesdrop on, crack or forge the TCC identity, since the packet stream is too large to analyze and the noise differs between the users and the attackers. A platform is designed to verify our proposed method. The experiments show that the intervals and the thresholds are the key factors for accuracy and efficiency, and also prove that our method is a secure way to carry identity information that can be implemented in various network applications.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing Halanay's inequality to time-varying differential inequalities.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.1
0.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
0
0
A 4-bit Mixed-Signal MAC Array with Swing Enhancement and Local Kernel Memory Modern deep neural networks require energy- and area-efficient multi-bit multiply-accumulate (MAC) functions. In-memory computing (IMC) with analog accumulation has shown the potential to outperform purely digital solutions but lacks efficient multi-bit computation. In this work, we explore the design of a mixed-signal, charge-sharing compute array that performs 4-bit MAC operations within one clo...
A Low-Power Speech Recognizer and Voice Activity Detector Using Deep Neural Networks. This paper describes digital circuit architectures for automatic speech recognition (ASR) and voice activity detection (VAD) with improved accuracy, programmability, and scalability. Our ASR architecture is designed to minimize off-chip memory bandwidth, which is the main driver of system power consumption. A SIMD processor with 32 parallel execution units efficiently evaluates feed-forward deep n...
Design of an Always-On Deep Neural Network-Based 1-μW Voice Activity Detector Aided With a Customized Software Model for Analog Feature Extraction This paper presents an ultra-low-power voice activity detector (VAD). It uses analog signal processing for acoustic feature extraction (AFE) directly on the microphone output, approximate event-driven analog-to-digital conversion (ED-ADC), and digital deep neural network (DNN) for speech/non-speech classification. New circuits, including the low-noise amplifier, bandpass filter, and full-wave rectifier contribute to the more than 9× normalized power/channel reduction in the feature extraction front-end compared to the best prior art. The digital DNN is a three-hidden-layer binarized multilayer perceptron (MLP) with a 2-neuron output layer and a 48-neuron input layer that receives parallel event streams from the ED-ADCs. To obtain the DNN weights via off-line training, a customized front-end model written in python is constructed to accelerate feature generation in software emulation, and the model parameters are extracted from Spectre simulations. The chip, fabricated in 0.18-μm CMOS, has a core area of 1.66 × 1.52 mm² and consumes 1 μW. The classification measurements using the 1-hour 10-dB signal-to-noise ratio audio with restaurant background noise show a mean speech/non-speech hit rate of 84.4%/85.4% with a 1.88%/4.65% 1-σ variation across ten dies that are all loaded with the same weights.
A 7.3 M Output Non-Zeros/J, 11.7 M Output Non-Zeros/GB Reconfigurable Sparse Matrix-Matrix Multiplication Accelerator. A sparse matrix-matrix multiplication (SpMM) accelerator with 48 heterogeneous cores and a reconfigurable memory hierarchy is fabricated in 40-nm CMOS. The compute fabric consists of dedicated floating-point multiplication units, and general-purpose Arm Cortex-M0 and Cortex-M4 cores. The on-chip memory reconfigures as scratchpad or cache, depending on the phase of the algorithm. The memory and comput...
BinarEye: An always-on energy-accuracy-scalable binary CNN processor with all memory on chip in 28nm CMOS This paper introduces BinarEye: the first digital processor for always-on Binary Convolutional Neural Networks. The chip maximizes data reuse through a Neuron Array exploiting local weight Flip-Flops. It stores full network models and feature maps and hence requires no off-chip bandwidth, which leads to a 230 1b-TOPS/W peak efficiency. Its 3 levels of flexibility - (a) weight reconfiguration, (b) a programmable network depth and (c) a programmable network width - allow trading energy for accuracy depending on the task's requirements. BinarEye's full-system input-to-label energy consumption ranges from 14.4uJ/f for 86%/CIFAR-10 and 98%/owner recognition down to 0.92uJ/f for 94%/face detection at up to 1700 frames per second. This is 3-12-70× more efficient than the state-of-the-art at on-par accuracy.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
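A minimal sketch of Chord's core mapping, hashing nodes and keys onto a ring and assigning each key to its successor; finger tables, joins, and stabilization are omitted, and the node names and identifier-space size are invented for the demo.

```python
# Chord-style consistent hashing: each key maps to its successor node.
import hashlib
from bisect import bisect_right

M = 2**16                                       # small identifier space for the demo

def chord_id(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

nodes = sorted(chord_id(f"node-{i}") for i in range(8))

def successor(key_id):
    """First node clockwise from key_id on the identifier ring."""
    i = bisect_right(nodes, key_id)
    return nodes[i % len(nodes)]                # wrap around the ring

for key in ("alpha.txt", "beta.txt", "gamma.txt"):
    kid = chord_id(key)
    print(f"{key} (id {kid}) -> node {successor(kid)}")
```

Because only the keys between a node and its predecessor move when a node joins or leaves, this mapping gives the logarithmic-state, incremental-rebalancing behavior the abstract describes.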
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of DieHard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices, whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multi-input-multi-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the μT level [1-3], in contrast to the nT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs>fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with an average sampling rate approaching 92 Sps on record 106 of the MIT-BIH Arrhythmia ECG benchmark, while the classic LC-based approach requires a sampling rate higher than 500 Sps on the same record. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
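The slope-dependent rate selection can be modeled as follows; the thresholds, candidate rates, and the assumption of a finely and uniformly sampled input are illustrative stand-ins for the pulse-generator circuit:

```python
import numpy as np

def slope_dependent_sample(signal, dt, thresholds=(0.5, 2.0),
                           rates=(50.0, 200.0, 500.0)):
    """Toy model: walk a finely sampled signal (spacing dt seconds) and
    choose the next sample interval from a small set of rates (Hz)
    according to the local slope magnitude."""
    times, samples, i = [], [], 0
    while i < len(signal) - 1:
        slope = abs(signal[i + 1] - signal[i]) / dt
        if slope < thresholds[0]:
            rate = rates[0]        # flat region: sample slowly
        elif slope < thresholds[1]:
            rate = rates[1]
        else:
            rate = rates[2]        # steep region (e.g. QRS): sample fast
        times.append(i * dt)
        samples.append(signal[i])
        i += max(1, int(round(1.0 / (rate * dt))))
    return np.array(times), np.array(samples)

t = np.arange(0, 1.0, 1e-3)
x = np.sin(2 * np.pi * 5 * t) ** 7          # spiky, ECG-like test input
times, samples = slope_dependent_sample(x, dt=1e-3)
print(len(t), "->", len(samples), "samples")
```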
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
Concurrent Multi-Band Low-Noise Amplifier. With the rapid expansion of wireless communications, requirements for transceivers that support multiple concurrent services are continuously increasing, demanding the design of a concurrent low-noise amplifier (LNA) with low noise figure (NF), high gain, and high linearity over a wide frequency range for various wireless applications. The proposed work focuses on a concurrent multi-band LNA that operates at the navigational frequencies of 1.2 GHz and 1.5 GHz, the wireless communication frequencies of 2.45 GHz and 3.3 GHz, and the dedicated short range communication (DSRC) frequency of 5.8 GHz for vehicular communication applications. The circuit has a distinct input matching network that resonates at all five desired frequency bands, achieved by adapting the frequency transformation method. To accomplish simultaneous reception of the desired penta-band, the output matching is designed as a simple LC matching network with the aid of the load-pull methodology.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
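Chord's single operation (key → node) can be modeled compactly. A minimal sketch with a shortened identifier space, resolving the successor by local search instead of the O(log N) finger-table hops real Chord uses:

```python
import hashlib
from bisect import bisect_left

class MiniChordRing:
    """Consistent-hashing sketch of Chord's lookup: hash nodes and keys
    onto one identifier ring; a key lives at the first node clockwise
    from its identifier (its successor)."""

    def __init__(self, nodes, bits=16):
        self.bits = bits
        self.ring = sorted(self._hash(n) for n in nodes)
        self.node_of = {self._hash(n): n for n in nodes}

    def _hash(self, key):
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return h % (1 << self.bits)

    def lookup(self, key):
        i = bisect_left(self.ring, self._hash(key))
        nid = self.ring[i % len(self.ring)]   # wrap around the ring
        return self.node_of[nid]

ring = MiniChordRing(["node-a", "node-b", "node-c"])
print(ring.lookup("some-data-item"))
```

A join or leave only remaps the keys between a node and its neighbor on the ring, which is the consistent-hashing property the protocol builds on.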
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
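For concreteness, the lasso instance mentioned above maps onto ADMM as a three-line iteration. This is the standard textbook splitting (minimize 0.5||Ax − b||² + λ||z||₁ subject to x = z), with illustrative problem sizes:

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: the x-update is a ridge solve, the z-update
    is soft-thresholding, and u accumulates the scaled dual residual."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        z = soft(x + u, lam / rho)
        u = u + x - z
    return z

A = np.random.randn(50, 20)
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * np.random.randn(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # sparse estimate
```

The distributed variants surveyed in the review split A by rows or columns across workers and exchange only the small per-iteration vectors.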
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
All-Digital Background Calibration Technique for Time-Interleaved ADC Using Pseudo Aliasing Signal A new digital background calibration technique for gain mismatches and sample-time mismatches in a Time-Interleaved Analog-to-Digital Converter (TI-ADC) is presented to reduce the circuit area. In the proposed technique, the gain mismatches and the sample-time mismatches are calibrated by using pseudo aliasing signals instead of using a bank of adaptive FIR filters which is conventionally utilized. The pseudo aliasing signals are generated and subtracted from an ADC output. A pseudo aliasing generator consists of the Hadamard transform and a fixed FIR filter. In the case of a two-channel 10-bit TI-ADC, the proposed technique reduces the required word length of the FIR filter by about 50% without a look-up table (LUT) compared with the conventional technique. In addition, the proposed technique requires only one FIR filter, compared with the bank of adaptive filters which requires (M-1) FIR filters in an M-channel TI-ADC.
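The mechanism being exploited can be made concrete for two channels: a gain difference modulates the input by (−1)ⁿ, i.e. y[n] = ((g0+g1)/2)·x[n] + ((g0−g1)/2)·x[n]·(−1)ⁿ, which puts an aliased image at fs/2 − fin. The sketch below, with illustrative values, only demonstrates that mechanism and corrects it by equalizing per-channel power; the paper instead regenerates the image (the pseudo aliasing signal) with a Hadamard transform plus a fixed FIR filter and subtracts it:

```python
import numpy as np

n = np.arange(4096)
x = np.sin(2 * np.pi * 0.1234 * n)        # input samples (illustrative)
g0, g1 = 1.00, 1.02                       # per-channel gains (illustrative)
y = np.where(n % 2 == 0, g0, g1) * x      # interleaved ADC output

# Estimate the relative gain from per-channel power, then equalize:
ratio = np.sqrt(np.mean(y[1::2] ** 2) / np.mean(y[0::2] ** 2))
y_cal = y.copy()
y_cal[1::2] /= ratio                      # removes the (-1)^n image term
print(f"estimated g1/g0 = {ratio:.4f}")   # ~1.02
```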
Adaptive Calibration of Channel Mismatches in Time-Interleaved ADCs Based on Equivalent Signal Recombination In this paper, we present an adaptive calibration method for correcting channel mismatches in time-interleaved analog-to-digital converters (TIADCs). An equivalent signal recombination structure is proposed to eliminate aliasing components when the input signal bandwidth is greater than the Nyquist bandwidth of the sub-ADCs in a TIADC. A band-limited pseudorandom noise sequence is employed as the desired output of the TIADC and simultaneously is also converted into an analog signal, which is injected into the TIADC as the training signal during the calibration process. Adaptive calibration filters with parallel structures are used to optimize the TIADC output to the desired output. The main advantage of the proposed method is that it avoids a complicated error estimation or measurement and largely reduces the computational complexity. A four-channel 12-bit 400-MHz TIADC and its calibration algorithm are implemented by hardware, and the measured spurious-free dynamic range is greater than 76 dB up 90% of the entire Nyquist band. The hardware implementation cost can be dramatically reduced, especially in instrumentation or measurement equipment applications, where special calibration phases and system stoppages are common.
Seven-bit 700-MS/s Four-Way Time-Interleaved SAR ADC With Partial Vcm-Based Switching. This brief presents a 7-bit 700-MS/s four-way time-interleaved successive approximation register (SAR) analog-to-digital converter (ADC). A partial Vcm-based switching method is proposed that requires less digital overhead from the SAR controller and achieves better conversion accuracy. Compared with switchback switching, the proposed method can further reduce the common mode variation by 50%. In ...
Low complexity digital background calibration algorithm for the correction of timing mismatch in time-interleaved ADCs. A low-complexity post-processing algorithm to estimate and compensate for timing skew error in a four-channel time-interleaved analog to digital converter (TIADC) is presented in this paper, together with its hardware implementation. The Lagrange interpolator is used as the reconstruction filter which alleviates online interpolator redesign by using a simplified representation of coefficients. Simulation results show that the proposed algorithm can suppress error tones for input signal frequency from 0 to 0.4fs. The proposed structure has, at least, 41% reduction in the number of required multipliers. Implementation of the algorithm, for a four-channel 10-bit TIADC, show that, for a 0.4fs input signal frequency, the Signal to Noise and Distortion Ratio (SNDR) and Spurious-Free Dynamic Range (SFDR) are improved 31.26 dB and 43.7 dB, respectively. Our proposed approximation technique does not degrade the performance of system, resulting in the same SNDR and SFDR as the exact coefficient values. In addition, the proposed structure provides an acceptable performance in the presence of wideband signals.
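The correction above leans on Lagrange interpolation (as does the mean-value construction in the next entry), so a sketch of the underlying fractional-delay filter may help. The closed form h[k] = Π_{m≠k} (D − m)/(k − m) is standard; the delay value below is illustrative:

```python
import numpy as np

def lagrange_fd_coeffs(delay, order=3):
    """FIR coefficients of an order-N Lagrange fractional-delay filter:
    h[k] = prod over m != k of (delay - m) / (k - m)."""
    taps = np.arange(order + 1)
    h = np.ones(order + 1)
    for k in taps:
        for m in taps:
            if m != k:
                h[k] *= (delay - m) / (k - m)
    return h

# Re-time a skewed channel by the filter's nominal group delay (N/2)
# plus an estimated skew of 0.02 samples (illustrative):
h = lagrange_fd_coeffs(delay=1.5 + 0.02, order=3)
x = np.sin(2 * np.pi * 0.05 * np.arange(64))
y = np.convolve(x, h, mode="same")
```

Recomputing these coefficients when the skew estimate changes is cheap, which is why simplified coefficient representations avoid online filter redesign in hardware.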
A Digital Adaptive Calibration Method of Timing Mismatch in TIADC Based on Adjacent Channels Lagrange Mean Value Difference This paper presents a digital adaptive calibration method to overcome the effect of timing mismatches in the time-interleaved analog-to-digital converter (TIADC). The structure of the channel splitting-recombining is proposed. The TIADC with M channels is divided into log2(M) stages for calibration, and each stage is composed of one or more two-channel sub-systems. The Lagrange mean value difference of adjacent channels is constructed by arithmetical approximation to estimate the timing mismatch. It does not need to oversample the bandlimited input signal with an oversampling ratio. And it does not require the input signal to have significant power over its bandwidth for identification. Timing mismatches are corrected by the cascaded differentiator-multipliers. A reuse structure is developed to reduce the consumption of hardware resources. Simulation results of the four-channel TIADC show that the spurious-free dynamic range (SFDR) can be improved from 31.19 to 73.39 dB. Hardware implementation based on FPGA is also carried out, and the corrected SFDR is measured at 72.29 dB. The approach has low complexity, and the bandwidth of the input signal can reach the Nyquist bandwidth of the complete TIADC system.
A Polynomial-Based Time-Varying Filter Structure for the Compensation of Frequency-Response Mismatch Errors in Time-Interleaved ADCs This paper introduces a structure for the compensation of frequency-response mismatch errors in M-channel time-interleaved analog-to-digital converters (ADCs). It makes use of a number of fixed digital filters, approximating differentiators of different orders, and a few variable multipliers that correspond to parameters in polynomial models of the channel frequency responses. Whenever the channel frequency responses change, which occurs from time to time in a practical time-interleaved ADC, it suffices to alter the values of these variable multipliers. In this way, expensive on-line filter design is avoided. The paper includes several design examples that illustrate the properties and capabilities of the proposed structure.
A 2.8 GS/s 44.6 mW Time-Interleaved ADC Achieving 50.9 dB SNDR and 3 dB Effective Resolution Bandwidth of 1.5 GHz in 65 nm CMOS. This paper presents a power- and area-efficient 24-way time-interleaved successive-approximation-register (SAR) analog-to-digital converter (ADC) that achieves 2.8 GS/s and 8.1 ENOB in 65 nm CMOS. To minimize the power and the area, the capacitors in the capacitive DAC are sized to meet the thermal noise requirements rather than the matching requirements, leading to the LSB capacitance of 50 aF. A...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
A process calculus for Mobile Ad Hoc Networks We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation equivalence (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation equivalence, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. We additionally define a symbolic semantics for the ω-calculus extended with the mismatch operator, along with a corresponding notion of symbolic bisimulation equivalence, and establish congruence results for this extension as well. Finally, we illustrate the practical utility of the calculus by developing and analyzing formal models of a leader election protocol for MANETs and the AODV routing protocol.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
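A first-order reminder of why the latch both amplifies exponentially and exhibits metastability (a textbook small-signal result stated for orientation, not the paper's full time-varying analysis): during regeneration the cross-coupled pair presents a negative conductance $-g_m$ to the capacitance $C$ at each output node, so

$$ C\,\frac{dv}{dt} = g_m v \;\Rightarrow\; v(t) = v_0\, e^{t/\tau}, \qquad \tau = \frac{C}{g_m}, $$

and the time to regenerate an initial imbalance $v_0$ up to a logic level $V_L$ is

$$ t_{\mathrm{reg}} = \tau \ln\frac{V_L}{v_0}, $$

which diverges as $v_0 \to 0$.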
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.042157
0.033704
0.033333
0.033333
0.033333
0.020741
0.006284
0
0
0
0
0
0
0
A 746 nW ECG Processor ASIC Based on Ternary Neural Network This paper presents an ultra-low power electrocardiography (ECG) processor application-specific integrated circuit (ASIC) for the real-time detection of abnormal cardiac rhythms (ACRs). The proposed ECG processor can support wearable or implantable ECG devices for long-term health monitoring. It adopts a derivative-based patient adaptive threshold approach to detect the R peaks in the PQRST complex of ECG signals. Two tiny machine learning classifiers are used for the accurate classification of ACRs. A 3-layer feed-forward ternary neural network (TNN) is designed, which classifies the QRS complex's shape, followed by the adaptive decision logics (DL). The proposed processor requires only 1 KB on-chip memory to store the parameters and ECG data required by the classifiers. The ECG processor has been implemented based on fully-customized near-threshold logic cells using thick-gate transistors in 65-nm CMOS technology. The ASIC core occupies a die area of 1.08 mm². The measured total power consumption is 746 nW, with 0.8 V power supply at 2.5 kHz real-time operating clock. It can detect 13 abnormal cardiac rhythms with a sensitivity and specificity of 99.10% and 99.5%. The number of detectable ACR types far exceeds the other low power designs in the literature.
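The arithmetic saving behind a TNN is that weights take values in {−1, 0, +1}, so each multiply-accumulate degenerates to an add, a subtract, or a skip. A minimal sketch with an illustrative threshold and per-layer scale; the trained quantizer and the ASIC's fixed-point details are not modeled:

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Map real weights to {-1, 0, +1} with a dead zone around zero."""
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    return q

def tnn_layer(x, w_ternary, scale):
    # The matmul needs only +/- accumulations; a per-layer scale
    # recovers the dynamic range lost by ternarizing.
    return np.maximum(scale * (x @ w_ternary), 0.0)   # ReLU

W = 0.1 * np.random.randn(16, 8)
x = np.random.randn(1, 16)
print(tnn_layer(x, ternarize(W), scale=np.mean(np.abs(W))))
```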
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Work-Efficient Parallel GPU Methods for Single-Source Shortest Paths Finding the shortest paths from a single source to all other vertices is a fundamental method used in a variety of higher-level graph algorithms. We present three parallel-friendly and work-efficient methods to solve this Single-Source Shortest Paths (SSSP) problem: Workfront Sweep, Near-Far, and Bucketing. These methods choose different approaches to balance the trade-off between saving work and organizational overhead. In practice, all of these methods do much less work than traditional Bellman-Ford methods, while adding only a modest amount of extra work over serial methods. These methods are designed to have a sufficient parallel workload to fill modern massively-parallel machines, and select reorganizational schemes that map well to these architectures. We show that in general our Near-Far method has the highest performance on modern GPUs, outperforming other parallel methods. We also explore a variety of parallel load-balanced graph traversal strategies and apply them towards our SSSP solver. Our work-saving methods always outperform a traditional GPU Bellman-Ford implementation, achieving rates up to 14x higher on low-degree graphs and 340x higher on scale-free graphs. We also see significant speedups (20-60x) when compared against a serial implementation on graphs with adequately high degree.
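A sequential model of the Near-Far idea, with a dict-of-lists graph and an illustrative delta (the GPU version processes each pile as a parallel workfront rather than a loop):

```python
import math

def sssp_near_far(adj, source, delta):
    """Relax edges from the 'near' pile (tentative distance below the
    current split) first; defer the rest to 'far', then advance the
    split by delta and revalidate. adj maps u -> [(v, weight), ...]."""
    dist = {v: math.inf for v in adj}
    dist[source] = 0.0
    split, near, far = delta, [source], []
    while near or far:
        while near:
            u = near.pop()
            for v, w in adj[u]:
                nd = dist[u] + w
                if nd < dist[v]:
                    dist[v] = nd
                    (near if nd < split else far).append(v)
        split += delta                      # move to the next distance band
        near = [v for v in far if dist[v] < split]
        far = [v for v in far if dist[v] >= split]
    return dist

adj = {0: [(1, 2.0), (2, 7.0)], 1: [(2, 1.0)], 2: []}
print(sssp_near_far(adj, 0, delta=3.0))    # {0: 0.0, 1: 2.0, 2: 3.0}
```

Compared with Bellman-Ford, the split keeps most relaxations useful; compared with Dijkstra, it tolerates stale entries instead of maintaining a strict priority order.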
A fast GPU algorithm for graph connectivity Graphics processing units provide large computational power at a very low price, which positions them as a ubiquitous accelerator. General purpose programming on graphics processing units (GPGPU) is best suited for regular data parallel algorithms. They are not directly amenable to algorithms which have irregular data access patterns, such as list ranking, finding the connected components of a graph, and the like. In this work, we present a GPU-optimized implementation for finding the connected components of a given graph. Our implementation tries to minimize the impact of irregularity, both at the data level and functional level. Our implementation achieves a speed up of 9 to 12 times over the best sequential CPU implementation. For instance, our implementation finds connected components of a graph of 10 million nodes and 60 million edges in about 500 milliseconds on a GPU, given a random edge list. We also draw interesting observations on why PRAM algorithms, such as the Shiloach-Vishkin algorithm, may not be a good fit for the GPU and how they should be modified.
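A CPU model of the hooking plus pointer-jumping pattern that GPU connected-components codes adapt from Shiloach-Vishkin; each inner loop body corresponds to what one GPU thread would do:

```python
def connected_components(num_vertices, edges):
    parent = list(range(num_vertices))
    changed = True
    while changed:
        changed = False
        for u, v in edges:                 # hooking: one thread per edge
            ru, rv = parent[u], parent[v]
            if parent[ru] == ru and parent[rv] == rv and ru != rv:
                parent[max(ru, rv)] = min(ru, rv)
                changed = True
        for v in range(num_vertices):      # pointer jumping: per vertex
            while parent[v] != parent[parent[v]]:
                parent[v] = parent[parent[v]]
    return parent

print(connected_components(6, [(0, 1), (1, 2), (4, 5)]))
# -> [0, 0, 0, 3, 4, 4]: components rooted at 0, 3, and 4
```

The irregularity the abstract refers to shows up here as the data-dependent chain lengths in the pointer-jumping phase.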
Enterprise: breadth-first graph traversal on GPUs The Breadth-First Search (BFS) algorithm serves as the foundation for many graph-processing applications and analytics workloads. While Graphics Processing Units (GPUs) offer massive parallelism, achieving high-performance BFS on GPUs entails efficient scheduling of a large number of GPU threads and effective utilization of the GPU memory hierarchy. In this paper, we present Enterprise, a new GPU-based BFS system that combines three techniques to remove potential performance bottlenecks: (1) streamlined GPU thread scheduling through constructing a frontier queue without contention from concurrent threads, yet containing no duplicated frontiers and optimized for both top-down and bottom-up BFS; (2) GPU workload balancing that classifies the frontiers based on different out-degrees to utilize the full spectrum of GPU parallel granularity, which significantly increases thread-level parallelism; and (3) GPU-based BFS direction optimization that quantifies the effect of hub vertices on direction-switching and selectively caches a small set of critical hub vertices in the limited GPU shared memory to reduce expensive random data accesses. We have evaluated Enterprise on a large variety of graphs with different GPU devices. Enterprise achieves up to 76 billion traversed edges per second (TEPS) on a single NVIDIA Kepler K40, and up to 122 billion TEPS on two GPUs, ranking No. 45 in the Graph 500 in November 2014. Enterprise is also very energy-efficient, ranking No. 1 in the GreenGraph 500 (small data category) and delivering 446 million TEPS per watt.
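The direction-switching core that Enterprise builds on can be sketched as follows (its hub-vertex caching and frontier classification are not modeled); alpha is the usual tuning heuristic for when the frontier is large enough to go bottom-up:

```python
def hybrid_bfs(adj, radj, source, alpha=14):
    """Top-down expansion while the frontier is small; bottom-up (each
    unvisited vertex scans its in-neighbors) once the frontier touches
    more than ~1/alpha of all edges. adj/radj are out-/in-adjacency."""
    n = len(adj)
    total_edges = sum(map(len, adj))
    depth = [-1] * n
    depth[source] = 0
    frontier, level = [source], 0
    while frontier:
        if sum(len(adj[v]) for v in frontier) > total_edges / alpha:
            nxt = [v for v in range(n) if depth[v] == -1
                   and any(depth[u] == level for u in radj[v])]
        else:
            nxt = sorted({w for v in frontier for w in adj[v]
                          if depth[w] == -1})
        for v in nxt:
            depth[v] = level + 1
        frontier, level = nxt, level + 1
    return depth

adj = [[1, 2], [3], [3], []]
radj = [[], [0], [0], [1, 2]]
print(hybrid_bfs(adj, radj, 0))   # -> [0, 1, 1, 2]
```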
Accelerating sparse matrix-vector multiplication on GPUs using bit-representation-optimized schemes The sparse matrix-vector (SpMV) multiplication routine is an important building block used in many iterative algorithms for solving scientific and engineering problems. One of the main challenges of SpMV is its memory-boundedness. Although compression has been proposed previously to improve SpMV performance on CPUs, its use has not been demonstrated on the GPU because of the serial nature of many compression and decompression schemes. In this paper, we introduce a family of bit-representation-optimized (BRO) compression schemes for representing sparse matrices on GPUs. The proposed schemes, BRO-ELL, BRO-COO, and BRO-HYB, perform compression on index data and help to speed up SpMV on GPUs through reduction of memory traffic. Furthermore, we formulate a BRO-aware matrix reordering scheme as a data clustering problem and use it to increase compression ratios. With the proposed schemes, experiments show that average speedups of 1.5× compared to ELLPACK and HYB can be achieved for SpMV on GPUs.
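The compression idea is easiest to see on one row of column indices: delta-encode, then store each value with just enough bits for the largest delta. The packing below mirrors that idea only; the paper's BRO-ELL/BRO-COO/BRO-HYB layouts and their GPU decompression kernels are not reproduced:

```python
import numpy as np

def bro_pack(col_indices):
    """Delta-encode sorted column indices and pack each delta into the
    minimum fixed bit width for the block."""
    deltas = np.diff(col_indices, prepend=0)
    width = max(1, int(np.ceil(np.log2(int(deltas.max()) + 1))))
    packed = 0
    for i, d in enumerate(deltas.tolist()):
        packed |= d << (i * width)         # concatenate width-bit fields
    return packed, width, len(deltas)

def bro_unpack(packed, width, count):
    mask = (1 << width) - 1
    deltas = [(packed >> (i * width)) & mask for i in range(count)]
    return np.cumsum(deltas)

cols = np.array([3, 7, 8, 15, 16])
packed, w, c = bro_pack(cols)
print(w, list(bro_unpack(packed, w, c)))   # 3 bits per index vs. 32
```

Reordering rows so that similar-magnitude deltas cluster together is what the clustering formulation buys: smaller widths per block and hence higher compression.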
High Performance Exact Triangle Counting on GPUs. This paper presents a GPU implementation of the graph triangle counting operation based on the set intersection algorithm. The algorithm is implemented in four kernels optimized for different types of graphs, in code delivering performance higher than the current state-of-the-art and without preprocessing the input graph. At runtime, a lightweight heuristic is used to select the kernel to run bas...
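The set-intersection kernel at the heart of such counters, shown as a sequential sketch (the GPU versions parallelize the per-edge merges):

```python
def count_triangles(adj):
    """For each edge (u, v) with u < v, merge the two sorted adjacency
    lists and count common neighbors w > v, so each triangle is counted
    exactly once."""
    total = 0
    for u, nbrs in enumerate(adj):
        for v in nbrs:
            if v <= u:
                continue
            a, b, i, j = adj[u], adj[v], 0, 0
            while i < len(a) and j < len(b):   # sorted-list merge
                if a[i] == b[j]:
                    if a[i] > v:
                        total += 1
                    i += 1; j += 1
                elif a[i] < b[j]:
                    i += 1
                else:
                    j += 1
    return total

adj = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]   # K4
print(count_triangles(adj))                          # -> 4
```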
Mathematical foundations of the GraphBLAS The GraphBLAS standard (GraphBlas.org) is being developed to bring the potential of matrix-based graph algorithms to the broadest possible audience. Mathematically, the GraphBLAS defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the mathematics of the GraphBLAS. Graphs represent connections between vertices with edges. Matrices can represent a wide range of graphs using adjacency matrices or incidence matrices. Adjacency matrices are often easier to analyze while incidence matrices are often better for representing data. Fortunately, the two are easily connected by matrix multiplication. A key feature of matrix mathematics is that a very small number of matrix operations can be used to manipulate a very wide range of graphs. This composability of a small number of operations is the foundation of the GraphBLAS. A standard such as the GraphBLAS can only be effective if it has low performance overhead. Performance measurements of prototype GraphBLAS implementations indicate that the overhead is low.
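The composability claim is easiest to see on BFS, the canonical GraphBLAS example: levels fall out of repeated vector-matrix products over a Boolean semiring (OR as addition, AND as multiplication), emulated below with an integer matvec and a nonzero test:

```python
import numpy as np

def bfs_levels(A, source):
    """BFS over adjacency matrix A (A[i, j] nonzero means edge i -> j):
    each vector-matrix product advances the frontier one level, masked
    by the set of still-unvisited vertices."""
    n = A.shape[0]
    level = np.full(n, -1)
    frontier = np.zeros(n, dtype=np.int64)
    frontier[source] = 1
    depth = 0
    while frontier.any():
        level[frontier > 0] = depth
        frontier = (frontier @ A) * (level == -1)   # masked OR.AND step
        depth += 1
    return level

A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(bfs_levels(A, 0))   # -> [0 1 1 2]
```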
A scalable processing-in-memory accelerator for parallel graph processing The explosion of digital data and the ever-growing need for fast data analysis have made in-memory big-data processing in computer systems increasingly important. In particular, large-scale graph processing is gaining attention due to its broad applicability from social science to machine learning. However, scalable hardware design that can efficiently process large graphs in main memory is still an open problem. Ideally, cost-effective and scalable graph processing systems can be realized by building a system whose performance increases proportionally with the sizes of graphs that can be stored in the system, which is extremely challenging in conventional systems due to severe memory bandwidth limitations. In this work, we argue that the conventional concept of processing-in-memory (PIM) can be a viable solution to achieve such an objective. The key modern enabler for PIM is the recent advancement of the 3D integration technology that facilitates stacking logic and memory dies in a single package, which was not available when the PIM concept was originally examined. In order to take advantage of such a new technology to enable memory-capacity-proportional performance, we design a programmable PIM accelerator for large-scale graph processing called Tesseract. Tesseract is composed of (1) a new hardware architecture that fully utilizes the available memory bandwidth, (2) an efficient method of communication between different memory partitions, and (3) a programming interface that reflects and exploits the unique hardware design. It also includes two hardware prefetchers specialized for memory access patterns of graph processing, which operate based on the hints provided by our programming model. Our comprehensive evaluations using five state-of-the-art graph processing workloads with large real-world graphs show that the proposed architecture improves average system performance by a factor of ten and achieves 87% average energy reduction over conventional systems.
The architecture of the DIVA processing-in-memory chip The DIVA (Data IntensiVe Architecture) system incorporates a collection of Processing-In-Memory (PIM) chips as smart-memory co-processors to a conventional microprocessor. We have recently fabricated prototype DIVA PIMs. These chips represent the first smart-memory devices designed to support virtual addressing and capable of executing multiple threads of control. In this paper, we describe the prototype PIM architecture. We emphasize three unique features of DIVA PIMs, namely, the memory interface to the host processor, the 256-bit wide datapaths for exploiting on-chip bandwidth, and the address translation unit. We present detailed simulation results on eight benchmark applications. When just a single PIM chip is used, we achieve an average speedup of 3.3X over host-only execution, due to lower memory stall times and increased fine-grain parallelism. These 1-PIM results suggest that a PIM-based architecture with many such chips yields significantly higher performance than a multiprocessor of a similar scale and at a much reduced hardware cost.
Rowhammer.js: A remote software-induced fault attack in JavaScript A fundamental assumption in software security is that a memory location can only be modified by processes that may write to this memory location. However, a recent study has shown that parasitic effects in DRAM can change the content of a memory cell without accessing it, but by accessing other memory locations at a high frequency. This so-called Rowhammer bug occurs in most of today's memory modules and has fatal consequences for the security of all affected systems, e.g., privilege escalation attacks. All studies and attacks related to Rowhammer so far rely on the availability of a cache flush instruction in order to cause accesses to DRAM modules at a sufficiently high frequency. We overcome this limitation by defeating complex cache replacement policies. We show that caches can be forced into fast cache eviction to trigger the Rowhammer bug with only regular memory accesses. This makes it possible to trigger the Rowhammer bug even in highly restricted scripting environments. We demonstrate a fully automated attack that requires nothing but a website with JavaScript to trigger faults on remote hardware. Thereby we can gain unrestricted access to systems of website visitors. We show that the attack works on off-the-shelf systems. Existing countermeasures fail to protect against this new Rowhammer attack.
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that call no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
Advancing wireless link signatures for location distinction Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.
Concept and design of a cognitive radio prototyping platform The increasing demand for higher data rates in communication systems requires an efficient utilization of the limited frequency spectrum resource. Therefore, cognitive radio aspects are of utmost interest. The authors propose a reconfigurable cognitive radio platform which combines the advantages of the flexibility of a digital signal processor with the efficient parallelization capabilities of a field-programmable gate array. Since the ever increasing computational complexity complicates the dimensioning of such platforms, scalability and modularity was a crucial design constraint for the presented concept.
Agent-based modeling and simulation of a smart grid: A case study of communication effects on frequency control. A smart grid is the next generation power grid focused on providing increased reliability and efficiency in the wake of integration of volatile distributed energy resources. For the development of the smart grid, the modeling and simulation infrastructure is an important concern. This study presents an agent-based model for simulating different smart grid frequency control schemes, such as demand response. The model can be used for combined simulation of electrical, communication and control dynamics. The model structure is presented in detail, and the applicability of the model is evaluated with four distinct simulation case examples. The study confirms that an agent-based modeling and simulation approach is suitable for modeling frequency control in the smart grid. Additionally, the simulations indicate that demand response could be a viable alternative for providing primary control capabilities to the smart grid, even when faced with communication constraints.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
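The abstract above is truncated; for reference, the textbook ADMM recursion for an ℓ1-regularized reconstruction \(\min_x \tfrac{1}{2}\lVert\Phi x - y\rVert_2^2 + \lambda\lVert x\rVert_1\) (not necessarily the exact splitting used in the paper) alternates

\[
x^{k+1} = (\Phi^{\mathsf T}\Phi + \rho I)^{-1}\big(\Phi^{\mathsf T}y + \rho(z^{k} - u^{k})\big), \qquad
z^{k+1} = S_{\lambda/\rho}(x^{k+1} + u^{k}), \qquad
u^{k+1} = u^{k} + x^{k+1} - z^{k+1},
\]

where \(S_{\kappa}(v) = \operatorname{sign}(v)\,\max(|v| - \kappa,\, 0)\) is elementwise soft-thresholding. The fixed matrix inverse and the cheap per-iteration updates are what make ADMM attractive for a hardware implementation.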
1.103633
0.103633
0.103633
0.1
0.1
0.069089
0.000142
0.000025
0
0
0
0
0
0
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using an adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
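Criteria of this kind typically take the following form (a generic sketch for the continuous-time case; the exact inequalities in the paper may differ). With Metzler system matrices \(A_p\) and positive vectors \(\nu_p \succ 0\) defining \(V_p(x) = x^{\mathsf T}\nu_p\), require

\[
A_p^{\mathsf T}\nu_p \preceq -\lambda\,\nu_p
\quad \text{and} \quad
\nu_p \preceq \mu\,\nu_q \ \ \text{for all modes } p, q,
\]

for some \(\lambda > 0\) and \(\mu \ge 1\); the switched system is then exponentially stable for every switching signal with average dwell time \(\tau_a > \ln\mu/\lambda\). Setting \(\mu = 1\) forces all \(\nu_p\) to coincide, recovering the common copositive Lyapunov function for arbitrary switching mentioned above.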
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using an interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of an IT2 fuzzy H∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
Adaptive Fault-Tolerant Tracking Control for Discrete-Time Multiagent Systems via Reinforcement Learning Algorithm This article investigates the adaptive fault-tolerant tracking control problem for a class of discrete-time multiagent systems via a reinforcement learning algorithm. The action neural networks (NNs) are used to approximate unknown and desired control input signals, and the critic NNs are employed to estimate the cost function in the design procedure. Furthermore, the direct adaptive optimal controllers are designed by combining the backstepping technique with the reinforcement learning algorithm. Comparing the existing reinforcement learning algorithm, the computational burden can be effectively reduced by using the method of less learning parameters. The adaptive auxiliary signals are established to compensate for the influence of the dead zones and actuator faults on the control performance. Based on the Lyapunov stability theory, it is proved that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. Finally, some simulation results are presented to illustrate the effectiveness of the proposed approach.
An Efficient High-Order Alpha-Plane Aggregation In General Type-2 Fuzzy Systems Using Newton-Cotes Rules Nowadays, general type-2 fuzzy systems are an attractive alternative for non-linear control problems because they provide good robustness in real-world environments, where there exist many noise sources. This kind of fuzzy system can have better performance in comparison to type-1 fuzzy systems as it offers uncertainty handling capabilities. However, one of the main problems in implementing general type-2 fuzzy systems is their elevated computational cost. The computational cost of a general type-2 fuzzy system depends on the representation that is used; for example, the alpha-planes representation consists of discretizing a general type-2 fuzzy system into several horizontal slices called alpha-planes, solving each alpha-plane as an interval type-2 fuzzy system, and then integrating the results to approximate the general type-2 fuzzy system. The main contribution of this work is the proposed computational cost reduction of the alpha-planes representation by optimizing the alpha-planes integration process based on the composite Newton-Cotes rules. In this way, the number of alpha-planes required for a good approximation is reduced, and the computational cost is also reduced by introducing new equations for the alpha-planes aggregation. Finally, a systematic comparative analysis of the improvement offered by the proposed method with respect to the conventional approach is presented.
Tracking control design of interval type-2 polynomial-fuzzy-model-based systems with time-varying delay. In this paper, the tracking control design for the interval type-2 (IT2) polynomial-fuzzy-model-based (PFMB) control system subject to time-varying delay situation is investigated. The tracking control system is formed of the IT2 polynomial fuzzy model representing a nonlinear system with time-varying delay, the stable reference model and the IT2 polynomial fuzzy controller. The control objective is to design a proper IT2 polynomial fuzzy controller which is capable of driving the states of the polynomial fuzzy model to track those in the reference model and the tracking performance is evaluated and improved by the H∞ performance index. Also, to handle the uncertainty in the membership functions, the property of IT2 fuzzy sets is utilized to enhance the fuzzy controller’s robustness against uncertainty. In addition, considering the effect of time-varying delay, the Lyapunov–Krasovskii functional based approach is adopted to facilitate the delay-dependent stability analysis. Stability conditions depending on the time-varying delay characteristic with the consideration of H∞ performance are obtained in terms of sum-of-squares (SOS). Furthermore, the information of the IT2 membership functions is employed in the stability analysis to relax the stability conditions, both membership-function-independent (MFI) and membership-function-dependent (MFD) approaches are presented to develop the stability conditions. Simulation examples are presented to verify the effectiveness of the proposed tracking control approach.
On sampled-data fuzzy control design approach for T-S model-based fuzzy systems by using discretization approach. In this paper, a sampled-data fuzzy control design approach for Takagi-Sugeno (T-S) model-based fuzzy systems is proposed. Firstly, the T-S model-based fuzzy systems are approximated by T-S fuzzy model-based discrete systems with parameter uncertainties. Then a sufficient condition on the existence of a sampled-data fuzzy controller for the fuzzy systems is formulated in the form of linear matrix inequalities, which are proposed in a symmetrical form, so the proposed fuzzy control design approach is less conservative and resolves some problems in the existing literature. Finally, a simulation example is discussed to show the effectiveness of the obtained fuzzy control design approach.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory accesses of a CPU are distributed over these regions. Results from captured CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
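A toy Python sketch of the measurement idea: replay an address trace against one LRU-managed set, record the stack position each access hits, and treat any access outside position 0 as an MRU change (a single fully associative set and the short trace are simplifying assumptions):

```python
from collections import deque

def mru_profile(trace, ways=4):
    """Hit counts per LRU-stack position (0 = MRU) for one fully
    associative set of `ways` lines; the last slot counts misses."""
    stack = deque()                  # stack[0] is the MRU line
    counts = [0] * (ways + 1)
    for addr in trace:
        if addr in stack:
            counts[stack.index(addr)] += 1
            stack.remove(addr)
        else:
            counts[ways] += 1        # miss
            if len(stack) == ways:
                stack.pop()          # evict the LRU line
        stack.appendleft(addr)       # the accessed line becomes MRU
    return counts

# Any access that lands outside position 0 marks an "MRU change".
print(mru_profile([1, 1, 1, 2, 1, 1, 3, 1, 2, 1]))  # [3, 3, 1, 0, 3]
```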
Baring It All to Software: Raw Machines The most radical of the architectures that appear in this issue are Raw processors-highly parallel architectures with hundreds of very simple processors coupled to a small portion of the on-chip memory. Each processor, or tile, also contains a small bank of configurable logic, allowing synthesis of complex operations directly in configurable hardware. Unlike the others, this architecture does not use a traditional instruction set architecture. Instead, programs are compiled directly onto the Raw hardware, with all units told explicitly what to do by the compiler. The compiler even schedules most of the intertile communication. The real limitation to this architecture is the efficacy of the compiler. The authors demonstrate impressive speedups for simple algorithms that lend themselves well to this architectural model, but whether this architecture will be effective for future workloads is an open question.
A capacitor-free CMOS low-dropout regulator with damping-factor-control frequency compensation A 1.5-V 100-mA capacitor-free CMOS low-dropout regulator (LDO) for system-on-chip applications to reduce board space and external pins is presented. By utilizing damping-factor-control frequency compensation on the advanced LDO structure, the proposed LDO provides high stability, as well as fast line and load transient responses, even in capacitor-free operation. The proposed LDO has been implemented in a commercial 0.6-μm CMOS technology, and the active chip area is 568 μm×541 μm. The total error of the output voltage due to line and load variations is less than ±0.25%, and the temperature coefficient is 38 ppm/°C. Moreover, the output voltage can recover within 2 μs for full load-current changes. The power-supply rejection ratio at 1 MHz is -30 dB, and the output noise spectral densities at 100 Hz and 100 kHz are 1.8 and 0.38 μV/√Hz, respectively.
22.7-dB Gain −19.7-dBm ICP1dB UWB CMOS LNA A fully differential CMOS ultrawideband low-noise amplifier (LNA) is presented. The LNA has been realized in a standard 90-nm CMOS technology and consists of a common-gate stage and two subsequent common-source stages. The common-gate input stage realizes a wideband input impedance matching to the source impedance of the receiver (i.e., the antenna), whereas the two subsequent common-source stages...
A Hybrid Threshold Self-Compensation Rectifier For RF Energy Harvesting This paper presents a novel highly efficient 5-stage RF rectifier in an SMIC 65 nm standard CMOS process. To improve power conversion efficiency (PCE) and reduce the minimum input voltage, a hybrid threshold self-compensation approach is applied in the proposed RF rectifier, which combines gate-bias threshold compensation with body-effect compensation. The proposed circuit uses PMOSFETs in all stages except for the first stage to allow individual body bias, which eliminates the need for triple-well technology. The presented RF rectifier exhibits a simulated maximum PCE of 30% at −16.7 dBm (20.25 μW) and produces 1.74 V across a 0.5 MΩ load resistance. With a 1 MΩ load resistance, it outputs 1.5 V DC from a remarkably low input power level of −20.4 dBm (9 μW) with a PCE of about 25%.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.103989
0.1
0.1
0.1
0.1
0.1
0.0312
0.007044
0
0
0
0
0
0
Improving the Performance of a Line Regulating Converter in a Converter-Dominated DC Microgrid System This paper describes the controller design procedure for a line-regulating converter in a converter-dominated dc microgrid system. The purpose of the controller is to mitigate the effects of constant power loads on the stability and performance of the dc microgrid system. In this work, first, the overall structure, operation, and building blocks of the dc microgrid under analysis are introduced. Next, the dynamic model of the dc microgrid and the transfer functions required for the controller design are derived. Then, two proposed controller design methods are introduced and implemented on a virtual line-regulating converter in a dc microgrid system. Finally, the controller design methods are verified experimentally using results from a prototype dc microgrid system.
Machine-to-machine communications for home energy management system in smart grid. Machine-to-machine (M2M) communications have emerged as a cutting edge technology for next-generation communications, and are undergoing rapid development and inspiring numerous applications. This article presents an investigation of the application of M2M communications in the smart grid. First, an overview of M2M communications is given. The enabling technologies and open research issues of M2M ...
Reduced-Order Model and Stability Analysis of Low-Voltage DC Microgrid. Depleting fossil fuels, increasing energy demand, and the need for a high-reliability power supply motivate the use of dc microgrids. This paper analyzes the stability of low-voltage dc microgrid systems. Sources are controlled using a droop-based decentralized controller. Various components of the system have been modeled. A linearized system model is derived using a small-signal approximation. The stability of the system is analyzed by identifying the eigenvalues of the system matrix. The sufficiency condition for stable operation of the system is derived. It provides an upper bound on the droop constants and is useful during planning and designing of dc microgrids. Furthermore, the sensitivity of system poles to variation in cable resistance and inductance is identified. It is proved that the poles move further inside the negative real plane with a decrease in inductance or an increase in resistance. The method proposed in this paper is applicable to any interconnecting structure of sources and loads. The results obtained by analysis are verified by a detailed simulation study. Root locus plots are included to confirm the movement of system poles. The viability of the model is confirmed by experimental results from a scaled-down laboratory prototype of a dc microgrid developed for the purpose.
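The droop-based decentralized controller referenced above gives each source a voltage reference that droops linearly with its output current; in a generic form (symbols here are illustrative),

\[
V_i^{\mathrm{ref}} = V_{\mathrm{nom}} - k_i\, I_i,
\]

where \(k_i\) is the droop constant of source \(i\). The paper's sufficiency condition is an upper bound on these droop constants: larger \(k_i\) generally improves load sharing among sources but pushes the eigenvalues of the linearized system toward instability.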
Planning of the DC System Considering Restrictions on the Small-Signal Stability of EV Charging Stations and Comparison Between Series and Parallel Connections Series and parallel connections are proposed for electric vehicle charging stations (EVCSs) to satisfy the demands of large EVs. In this study, simplified linearized models of a DC network integrated with large EVCSs, which adopt series and parallel connections (SC-EVCS and PC-EVCS, respectively), are established. The control subsystem of each EVCS is shown to interact only weakly with the filter subsystem and the DC network. Moreover, a method for designing proper parameters to ensure self-stability of the EVCS is proposed. Additionally, when multiple EVCSs are connected to the DC network, the interaction among the connected EVCSs may decrease the DC system stability. Therefore, the positive-net-damping stability criterion is used to analyze the small-signal stability of the DC system, confirming that the instability of the DC system is caused by two factors, the filter subsystem of the EVCS and the equivalent impedance of the DC network. Considering the restrictions on stability, the maximum number of connections of SC-EVCSs and PC-EVCSs should be limited in planning. Furthermore, the study theoretically proves that compared with PC-EVCS, SC-EVCS can service more EVCSs to the DC network while ensuring stability. Finally, an example using MATLAB is presented to demonstrate the analytical conclusions.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
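A minimal Python sketch of the (k, n) threshold construction described above, over a prime field: the secret is the constant term of a random degree-(k − 1) polynomial, shares are point evaluations, and Lagrange interpolation at zero recovers the secret (the prime and the API are illustrative; real deployments would use a vetted library):

```python
import random

P = 2**61 - 1  # an illustrative prime field; secrets must be < P

def make_shares(secret, k, n):
    """Split `secret` into n shares such that any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):                      # Horner evaluation mod P
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789 -- any 3 of the 5 shares suffice
```

With fewer than k shares, every candidate secret remains equally consistent with the observed points, which is exactly the information-theoretic guarantee claimed above.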
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Cache attacks and countermeasures: the case of AES We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
A normal form for XML documents This paper takes a first step towards the design and normalization theory for XML documents. We show that, like relational databases, XML documents may contain redundant information, and may be prone to update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Our goal is to find a way of converting an arbitrary DTD into a well-designed one, that avoids these problems. We first introduce the concept of a functional dependency for XML, and define its semantics via a relational representation of XML. We then define an XML normal form, XNF, that avoids update anomalies and redundancies. We study its properties and show that it generalizes BCNF and a normal form for nested relations when those are appropriately coded as XML documents. Finally, we present a lossless algorithm for converting any DTD into one in XNF.
Synchronization via Pinning Control on General Complex Networks. This paper studies synchronization via pinning control on general complex dynamical networks, such as strongly connected networks, networks with a directed spanning tree, weakly connected networks, and directed forests. A criterion for ensuring network synchronization on strongly connected networks is given. It is found that the vertices with very small in-degrees should be pinned first. In addition, it is shown that the original condition with controllers can be reformulated such that it does not depend on the form of the chosen controllers, which implies that the vertices with very large out-degrees may be pinned. Then, a criterion for achieving synchronization on networks with a directed spanning tree, which can be composed of many strongly connected components, is derived. It is found that the strongly connected components with very few connections from other components should be controlled and the components with many connections from other components can achieve synchronization even without controls. Moreover, a simple but effective pinning algorithm for reaching synchronization on a general complex dynamical network is proposed. Finally, some simulation examples are given to verify the proposed pinning scheme.
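A tiny Python illustration of the first selection rule above, pinning the vertices with the smallest in-degrees first (the example graph and pinning budget are made-up inputs; this is only the selection step, not the pinning control law itself):

```python
def pick_pinned_nodes(edges, num_nodes, budget):
    """Return the `budget` vertices with the smallest in-degrees."""
    in_deg = [0] * num_nodes
    for _, v in edges:
        in_deg[v] += 1
    return sorted(range(num_nodes), key=lambda v: in_deg[v])[:budget]

# Strongly connected toy network; pin the 2 smallest-in-degree vertices.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
print(pick_pinned_nodes(edges, num_nodes=4, budget=2))  # [1, 2]
```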
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption compared to the other fractional SRC methods. Using a multiple-set insertion/deletion technique is efficient for reducing the distortion caused by the insertion/deletion step. When the conversion factor is (N ± d)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as d increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
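A naive Python sketch of direct-deletion SRC by a factor of (N − d)/N, with evenly spaced deletion points inside each block of N samples; the paper's contribution, selecting the optimum points and multiple-set combinations to minimize distortion, is deliberately not reproduced here:

```python
def delete_points(N, d):
    """Naive evenly spaced deletion indices within a block of N samples."""
    return {round((i + 0.5) * N / d) % N for i in range(d)}

def downconvert(samples, N, d):
    """Direct-deletion SRC by (N - d)/N: drop d samples per N-block."""
    drop = delete_points(N, d)
    return [s for i, s in enumerate(samples) if (i % N) not in drop]

x = list(range(20))
print(len(downconvert(x, N=10, d=2)))  # 16: two samples dropped per block
```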
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
0
Disturbance rejection in nonlinear systems based on equivalent-input-disturbance approach. This paper presents a new system configuration and a design method that improve disturbance rejection performance for a nonlinear system. The equivalent-input-disturbance (EID) approach is used to construct an EID estimator that estimates the influence of exogenous disturbances and nonlinearities on the output of the system. Sufficient stability conditions for state- and output-feedback control are derived in terms of linear matrix inequalities. New EID-based control laws that combine an EID estimate with a state- or output-feedback control law ensure good control performance. A numerical example illustrates the design method. A comparison between the EID-based control, conventional disturbance observer, disturbance-observer-based control, and sliding mode control methods demonstrates the validity and superiority of the EID-based control method.
Offset-Free Model Predictive Control for the Power Control of Three-Phase AC/DC Converters This paper describes an offset-free model predictive control (MPC) algorithm using a disturbance observer (DOB) to control the active/reactive powers of a three-phase AC/DC converter. The strategy of this paper is twofold. One is the use of DOB to remove the offset error and the other is the proper choice of the weighting matrices of a cost index to provide fast error decay with small overshoot. The DOB is designed to estimate the unknown disturbances of the AC/DC converter following the standard Luenberger observer design procedure. The proposed MPC minimizes a one-step-ahead cost index penalizing the predicted tracking error by performing a simple membership test without any use of numerical methods. A systematic way for choosing the weights of the cost index, which guarantees the global stability of the closed-loop system, is proposed. Use of the DOB eliminates the offset tracking errors in the real implementation. Using a 25-kW AC/DC converter, it is experimentally shown that the proposed MPC enhances the power tracking performance while considerably reducing the mutual interference of the active/reactive powers as well as the output voltage.
Further results on cloud control systems. This paper is devoted to further investigating cloud control systems (CCSs). The benefits and challenges of CCSs are outlined. Both new research results of ours and some representative work by other researchers are presented. It is argued that CCSs can have a significant and promising impact owing to their potential advantages.
Moving Horizon Estimation for Mobile Robots With Multirate Sampling. This paper investigates the multirate moving horizon estimation (MMHE) problem for mobile robots with inertial sensor and camera, where the sampling rates of the sensors are not identical. In the sense of the multirate systems, some sensors may have no measurements at certain sampling times, which can be regarded as measurement missing and may significantly degrade the estimation performance. A bi...
Real-Time Switched Model Predictive Control for a Cyber-Physical Wind Turbine Emulator The high complexity and nonlinearity of wind turbine (WT) systems impose the utilization of rigorous control methods such as model predictive control (MPC). MPC algorithms are computationally intensive requiring investigation of real-time implementability and feasibility in addition to control performance metrics. In this article, a switched model predictive controller (SMPC) is developed, implemented, and investigated for control objectives performance and real-time metrics. This article has two main contributions. First, embedded real-time SMPC is developed using qpOASES as an embedded solver for the online optimal control problem. Second, a cyber-physical real-time emulator for variable-speed variable-pitch utility-scale WT is developed and implemented on an xPC target machine using a high-fidelity linear parameter-varying model. The SMPC is evaluated on the fatigue, aerodynamics, structures, and turbulence (FAST) design code and the real-time emulator. The analysis and investigation of results highlight the feasibility and capability of SMPC for handling control objectives of WT systems within real-time using short control periods.
Network-Induced Constraints in Networked Control Systems—A Survey Networked control systems (NCSs) have, in recent years, brought many innovative impacts to control systems. However, great challenges are also met due to the network-induced imperfections. Such network-induced imperfections are handled as various constraints, which should appropriately be considered in the analysis and design of NCSs. In this paper, the main methodologies suggested in the literature to cope with typical network-induced constraints, namely time delays, packet losses and disorder, time-varying transmission intervals, competition of multiple nodes accessing networks, and data quantization, are surveyed; the results in the literature on the first two types of constraints are updated and categorized in different ways, and those on the latter three types of constraints are extended.
Memoryless Approach to the LQ and LQG Problems with Variable Input Delay This note studies the LQ and LQG problems for linear time invariant systems with a single time-varying input delay and instantaneous (memoryless) state feedback. We extend the memoryless state feedback solution proposed in [1] in two directions. We prove that in the deterministic case a memoryless state feedback can be in general optimal only up to a certain delay, for which we provide a sufficient, and sometimes strict, bound. Moreover, we show that this memoryless control is optimal also in the case of time-varying delays and that the quadratic cost functional has the same value as in the case without delay. For time varying delays the control law requires that the relationship between time points in which the input is generated and applied is known and invertible even if the delay function needs not to be differentiable or even continuous. Finally, we prove that the cost functional is bounded also in the stochastic case for the same delay interval as in the deterministic case, but with a larger cost than the delay-less LQG solution.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System. Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images lie in the storage server while computations can be conducted on either the embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time for promoting the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling, 2) how to place task images on storage servers, i.e., resource management, and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computational complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation-based studies.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
Bandwidth-efficient management of DHT routing tables Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20-23, 25, 26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table's size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
Design Aspects of an Active Electromagnetic Suspension System for Automotive Applications. This paper is concerned with the design aspects of an active electromagnetic suspension system for automotive applications which combines a brushless tubular permanent magnet actuator (TPMA) with a passive spring. This system provides for additional stability and safety by performing active roll and pitch control during cornering and braking. Furthermore, elimination of road irregularities is possible, hence passenger drive comfort is increased. Based upon measurements, static and dynamic specifications of the actuator are derived. The electromagnetic suspension is installed on a quarter-car test setup, and the improved performance using roll control is measured and compared to a commercial passive system. An alternative design using a slotless external-magnet tubular actuator is proposed which fulfills the derived performance, thermal, and volume specifications.
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
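For context, the classic real-valued bandpass sampling rule for a band occupying \([f_L, f_H]\) with bandwidth \(B = f_H - f_L\) confines the uniform rate \(f_s\) to the windows

\[
\frac{2 f_H}{n} \;\le\; f_s \;\le\; \frac{2 f_L}{n-1}, \qquad n = 1, 2, \ldots, \left\lfloor f_H / B \right\rfloor .
\]

Quadrature sampling works with a complex (I/Q) signal whose spectrum need not be symmetric, so the image-overlap constraint is looser; that is the effect behind the significantly larger set of allowable rates and the reduced minimum rate reported above.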
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia No. 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.112
0.1
0.1
0.1
0.1
0.05
0.0048
0
0
0
0
0
0
0
Event-Based Leader-following Consensus of Multi-Agent Systems with Input Time Delay The event-based control strategy is an effective methodology for tackling the distributed control of multi-agent systems with limited on-board resources. This technical note focuses on event-based leader-following consensus for multi-agent systems described by general linear models and subject to input time delay between controller and actuator. For each agent, the controller updates are event-based and only triggered at its own event times. A necessary condition and two sufficient conditions on leader-following consensus are presented. It is shown that continuous communication between neighboring agents can be avoided and Zeno behavior of the triggering time sequences is excluded. A numerical example is presented to illustrate the effectiveness of the obtained theoretical results.
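Event-based update rules of this kind typically take the following shape (a generic sketch, not the note's exact condition): agent \(i\)'s event times are

\[
t^{i}_{k+1} = \inf\big\{\, t > t^{i}_{k} \;:\; \lVert e_i(t) \rVert \ge \sigma\,\lVert z_i(t) \rVert + \varepsilon \,\big\},
\]

where \(e_i\) is the gap between the current state and the value last broadcast at \(t^{i}_{k}\), \(z_i\) is a local disagreement signal built from neighbors' broadcasts, and \(\sigma, \varepsilon > 0\) are design constants; a strictly positive \(\varepsilon\) is one standard way to enforce a minimum inter-event time and thus exclude Zeno behavior.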
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Adaptive tracking control of leader-follower systems with unknown dynamics and partial measurements. In this paper, a decentralized adaptive tracking control is developed for a second-order leader–follower system with unknown dynamics and relative position measurements. Linearly parameterized models are used to describe the unknown dynamics of a self-active leader and all followers. A new distributed system is obtained by using the relative position and velocity measurements as the state variables. By only using the relative position measurements, a dynamic output–feedback tracking control together with decentralized adaptive laws is designed for each follower. At the same time, the stability of the tracking error system and the parameter convergence are analyzed with the help of a common Lyapunov function method. Some simulation results are presented to validate the proposed adaptive tracking control.
Plug-and-Play Decentralized Model Predictive Control for Linear Systems In this technical note, we consider a linear system structured into physically coupled subsystems and propose a decentralized control scheme capable to guarantee asymptotic stability and satisfaction of constraints on system inputs and states. The design procedure is totally decentralized, since the synthesis of a local controller uses only information on a subsystem and its neighbors, i.e. subsystems coupled to it. We show how to automatize the design of local controllers so that it can be carried out in parallel by smart actuators equipped with computational resources and capable to exchange information with neighboring subsystems. In particular, local controllers exploit tube-based Model Predictive Control (MPC) in order to guarantee robustness with respect to physical coupling among subsystems. Finally, an application of the proposed control design procedure to frequency control in power networks is presented.
Vibration Control With MEMS Electrostatic Drives: A Self-Sensing Approach Nanopositioning is the actuation and sensing of motion on the nanometer scale and recent nanopositioner designs have been utilizing microelectromechanical systems (MEMS). This brief demonstrates a simple method to implement vibration control on a MEMS nanopositioner. The actuation and sensing of the system are performed with a MEMS electrostatic drive. The electrostatic drive is arranged to be self-sensing, that is, the drive’s voltage is used to actuate the system and the drive’s current is used to observe the system. With this arrangement, the current is proportional to velocity at the resonance frequency and velocity feedback is used to damp the nanopositioner. To filter the current signal and recover a displacement signal, a charge measurement may be preferred to a current measurement. The self-sensing arrangement was modified to be a charge sensor and resonant control was applied to damp the nanopositioner. With this arrangement, the gain at the resonance frequency was attenuated by 18.45 dB.
Adaptive Cooperative Output Regulation for a Class of Nonlinear Multi-Agent Systems In this technical note, an adaptive cooperative output regulation problem for a class of nonlinear multi-agent systems is considered. The cooperative output regulation problem is first converted into an adaptive stabilization problem for an augmented multi-agent system. A distributed adaptive control law with adoption of Nussbaum gain technique is then proposed to globally stabilize this augmented system. This control scheme is designed such that, in the presence of unknown control direction and large parameter variations in each agent, the closed-loop system maintains global stability and the output of each agent tracks a class of prescribed signals asymptotically.
Self-constructing wavelet neural network algorithm for nonlinear control of large structures An adaptive control algorithm is presented for nonlinear vibration control of large structures subjected to dynamic loading. It is based on integration of a self-constructing wavelet neural network (SCWNN) developed specifically for structural system identification with an adaptive fuzzy sliding mode control approach. The algorithm is particularly suitable when the physical properties such as the stiffnesses and damping ratios of the structural system are unknown or partially known, which is the case when a structure is subjected to an extreme dynamic event such as an earthquake, as the structural properties change during the event. SCWNN is developed for functional approximation of the nonlinear behavior of large structures using neural networks and wavelets. In contrast to earlier work, the identification and control are processed simultaneously, which makes the resulting adaptive control more applicable to real life situations. A two-part growing and pruning criterion is developed to construct the hidden layer in the neural network automatically. A fuzzy compensation controller is developed to reduce the chattering phenomenon. The robustness of the proposed algorithm is achieved by deriving a set of adaptive laws for determining the unknown parameters of wavelet neural networks using two Lyapunov functions. No offline training of the neural network is necessary for the system identification process. In addition, the earthquake signals are considered as unidentified. This is particularly important for on-line vibration control of large civil structures since the external dynamic loading due to earthquake is not available in advance. The model is applied to vibration control of a benchmark problem: a seismically excited continuous cast-in-place prestressed concrete box-girder highway bridge.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
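In its simplest form, gradient-based learning repeats the update

\[
w \;\leftarrow\; w - \eta\, \nabla_{w} L(w),
\]

where \(L\) is the training loss, \(w\) the network weights, and \(\eta\) the learning rate; back-propagation is the chain-rule procedure that computes \(\nabla_w L\) layer by layer through a multilayer network.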
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
MDVM System Concept, Paging Latency and Round-2 Randomized Leader Election Algorithm in SG The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication networks. However, mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of a cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology or buffer space limitations and is based on dynamically elected coordinators, eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round, and the possibility of a second-round execution decreases significantly with the increase in the size of the SG (|N_a|). The overall message complexity of the algorithm is O(|N_a|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
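To make the set-membership idea above concrete, here is a hedged sketch that intersects the current parameter set with the strip {theta : |y - phi.theta| <= eps} induced by one linear measurement. It deliberately uses an axis-aligned box instead of the paper's parallelotopes, so it is a cruder outer bound; all names and values are illustrative assumptions.

```python
import numpy as np

def box_update(lo, hi, phi, y, eps):
    """Outer-bound the intersection of the box [lo, hi] with the strip
    {theta : |y - phi.theta| <= eps}, coordinate by coordinate."""
    phi = np.asarray(phi, float)
    lo, hi = lo.astype(float).copy(), hi.astype(float).copy()
    for j in range(len(lo)):
        if phi[j] == 0.0:
            continue
        # bound the contribution of the other coordinates over the box
        others = np.delete(phi, j)
        olo, ohi = np.delete(lo, j), np.delete(hi, j)
        smin = np.sum(np.minimum(others * olo, others * ohi))
        smax = np.sum(np.maximum(others * olo, others * ohi))
        # y - eps <= phi_j*theta_j + s <= y + eps with s in [smin, smax]
        a = (y - eps - smax) / phi[j]
        b = (y + eps - smin) / phi[j]
        lo[j] = max(lo[j], min(a, b))
        hi[j] = min(hi[j], max(a, b))
    return lo, hi

# usage: prior box [-10,10]^2, one measurement with phi=(1,2), |e| <= 0.1
lo0, hi0 = np.array([-10.0, -10.0]), np.array([10.0, 10.0])
print(box_update(lo0, hi0, [1.0, 2.0], y=3.0, eps=0.1))
```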
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.06
0
0
0
0
0
0
A Smooth Double Proximal Primal-Dual Algorithm for a Class of Distributed Nonsmooth Optimization Problems This technical note studies a class of distributed nonsmooth convex consensus optimization problems. The cost function is a summation of local cost functions which are convex but nonsmooth. Each of the local cost functions consists of a twice differentiable (smooth) convex function and two lower semi-continuous (nonsmooth) convex functions. We call these single-smooth plus double-nonsmooth (SSDN) problems. Under mild conditions, we propose a distributed double proximal primal-dual optimization algorithm. Double proximal splitting is designed to deal with the difficulty caused by the unproximable property of the summation of those two nonsmooth functions. Besides, it can also guarantee that the proposed algorithm is locally Lipschitz continuous. An auxiliary variable in the double proximal splitting is introduced to estimate the subgradient of the second nonsmooth function. Theoretically, we conduct the convergence analysis by employing Lyapunov stability theory. It shows that the proposed algorithm can make the states achieve consensus at the optimal point. In the end, nontrivial simulations are presented and the results demonstrate the effectiveness of the proposed algorithm.
A Distributed and Scalable Processing Method Based Upon ADMM. The alternating direction multiplier method (ADMM) was originally devised as an iterative method for solving convex minimization problems by means of parallelization, and was recently used for distributed processing. This letter proposes a modification of state-of-the-art ADMM formulations in order to obtain a scalable version, well suited for a wide range of applications such as cooperative local...
Distributed Random Convex Programming via Constraints Consensus. This paper discusses distributed approaches for the solution of random convex programs (RCPs). RCPs are convex optimization problems with a (usually large) number N of randomly extracted constraints; they arise in several application areas, especially in the context of decision-making under uncertainty; see [G. C. Calafiore, SIAM J. Optim., 20 (2010), pp. 3427-3464; G. C. Calafiore and M. C. Campi, IEEE Trans. Automat. Control, 51 (2006), pp. 742-753]. We here consider a setup in which instances of the random constraints (the scenario) are not held by a single centralized processing unit, but are instead distributed among different nodes of a network. Each node "sees" only a small subset of the constraints, and may communicate with neighbors. The objective is to make all nodes converge to the same solution as the centralized RCP problem. To this end, we develop two distributed algorithms that are variants of the constraints consensus algorithm [G. Notarstefano and F. Bullo, Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, 2007, pp. 927-932; G. Notarstefano and F. Bullo, IEEE Trans. Automat. Control, 56 (2011), pp. 2247-2261]: the active constraints consensus algorithm, and the vertex constraints consensus (VCC) algorithm. We show that the active constraints consensus algorithm computes the overall optimal solution in finite time, and with almost surely bounded communication at each iteration of the algorithm. The VCC algorithm is instead tailored for the special case in which the constraint functions are convex also with respect to the uncertain parameters, and it computes the solution in a number of iterations bounded by the diameter of the communication graph. We further devise a variant of the VCC algorithm, namely quantized vertex constraints consensus (qVCC), to cope with the case in which communication bandwidth among processors is bounded. We discuss several applications of the proposed distributed techniques, including estimation, classification, and random model predictive control, and we present a numerical analysis of the performance of the proposed methods. As a complementary numerical result, we show that the parallel computation of the scenario solution using the active constraints consensus algorithm significantly outperforms its centralized equivalent.
Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization Given an undirected graph G = (N, E) of agents N = {1,..., N} connected with edges in E, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {φ_i}_{i∈N}, where φ_i ≜ ξ_i + f_i belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In one iteration, each agent-i computes the prox map of ξ_i and the gradient of f_i, and this is followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, which can only access noisy estimates of ∇f_i at each agent-i. This computational model abstracts a number of applications in distributed sensing, machine learning and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for the DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.
A New Randomized Block-Coordinate Primal-Dual Proximal Algorithm for Distributed Optimization This paper proposes the Triangularly Preconditioned Primal-Dual algorithm, a new primal-dual algorithm for minimizing the sum of a Lipschitz-differentiable convex function and two possibly nonsmooth convex functions, one of which is composed with a linear mapping. We devise a randomized block-coordinate (BC) version of the algorithm which converges under the same stepsize conditions as the full algor...
Adaptive event-based tracking control of unmanned marine vehicle systems with DoS attack The tracking control problem of a network-based unmanned marine vehicle (UMV) system is investigated in this paper. The whole system consists of a UMV, a communication network and a ground-based control station. The measurement data collected by the sensors are transmitted to the control station through the communication network, and an adaptive event-triggering mechanism is introduced, which can greatly reduce the communication burden. Moreover, an aperiodic DoS attack occurring in the channel between the control station and the actuator is considered; no data can be transmitted while the attack is active. The main goal is to design a Dynamic Output Feedback Control (DOFC) algorithm that tracks the given yaw velocity in the presence of the event-triggering mechanism and the DoS attack. By constructing a specific Lyapunov function, a sufficient condition for global exponential stability of the system with an expected H∞ disturbance attenuation index is given, and the controller gains can be obtained by solving a set of inequalities. Finally, the effectiveness of the proposed control scheme is demonstrated by a simulation study on the networked UMV system.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
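The parliamentary protocol described here is what later literature calls Paxos. As an orientation sketch only, the class below implements a single-decree acceptor in the now-common prepare/accept vocabulary rather than the paper's own metaphor; proposers, quorum counting, and message transport are omitted.

```python
class Acceptor:
    """Minimal single-decree Paxos acceptor (illustrative sketch)."""

    def __init__(self):
        self.promised = -1           # highest ballot promised so far
        self.accepted = (-1, None)   # (ballot, value) last accepted

    def prepare(self, ballot):
        # Phase 1b: promise to ignore lower ballots and report any
        # previously accepted value so the proposer can adopt it.
        if ballot > self.promised:
            self.promised = ballot
            return ('promise', ballot, self.accepted)
        return ('nack', self.promised)

    def accept(self, ballot, value):
        # Phase 2b: accept unless a higher ballot has been promised.
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return ('accepted', ballot, value)
        return ('nack', self.promised)

a = Acceptor()
print(a.prepare(1))      # ('promise', 1, (-1, None))
print(a.accept(1, 'v'))  # ('accepted', 1, 'v')
print(a.prepare(0))      # ('nack', 1)
```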
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm² of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
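Winnowing itself is compact enough to sketch: hash every k-gram, slide a window of w consecutive hashes, and record the minimum of each window, taking the rightmost occurrence on ties. The sketch below uses Python's built-in hash (randomized per process) where a rolling hash would be used in practice; the k and w defaults are arbitrary assumptions.

```python
def winnow(text, k=5, w=4):
    """Return winnowing fingerprints of `text` as (hash, position) pairs."""
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        # rightmost minimal hash in the window
        j = i + max(p for p, h in enumerate(window) if h == m)
        fingerprints.add((m, j))
    return fingerprints

print(len(winnow("do run run run do run run")))
```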
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
The analysis and improvement of a current-steering DAC's dynamic SFDR-I: the cell-dependent delay differences For high-accuracy current-steering digital-to-analog converters (DACs), the delay differences between the current sources are one of the major causes of poor dynamic performance. In this paper, a mathematical model describing the impact of the delay differences on the DAC's SFDR is presented. The results are verified by comparison to behavioral-level simulations and to actual measurement data from published papers. Based on this analysis, the delay differences cancellation (DDC) technique to reduce the impact of the delay differences on the SFDR is proposed and verified by simulation results.
An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM In this paper, we propose the concept of compute memory, where computation is deeply embedded into the memory (SRAM). This deep embedding enables multi-row read access and analog signal processing. Compute memory exploits the relaxed precision and linearity requirements of pattern recognition applications. System-level simulations incorporating various deterministic errors from the analog signal chain demonstrate that the limited accuracy of analog processing does not significantly degrade the system performance, meaning the probability of pattern detection is minimally impacted. The estimated energy saving is 63% compared to a conventional system with standard embedded memory and a parallel processing architecture, for a 256×256 target image.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
A 60 μW 60 nV/√Hz Readout Front-End for Portable Biopotential Acquisition Systems There is a growing demand for low-power, small-size and ambulatory biopotential acquisition systems. A crucial and important block of this acquisition system is the analog readout front-end. We have implemented a low-power and low-noise readout front-end with configurable characteristics for Electroencephalogram (EEG), Electrocardiogram (ECG), and Electromyogram (EMG) signals. Key to its performance is the new AC-coupled chopped instrumentation amplifier (ACCIA), which uses a low power current feedback instrumentation amplifier (IA). Thus, while chopping filters the 1/f noise of CMOS transistors and increases the CMRR, AC coupling is capable of rejecting differential electrode offset (DEO) up to ±50 mV from conventional Ag/AgCl electrodes. The ACCIA achieves 120 dB CMRR and 57 nV/√Hz input-referred voltage noise density, while consuming 11.1 μA from a 3 V supply. The chopping spike filter (CSF) stage filters the chopping spikes generated by the input chopper of the ACCIA and the digitally controllable variable gain stage is used to set the gain and the bandwidth of the front-end. The front-end is implemented in a 0.5 μm CMOS process. Total current consumption is 20 μA from 3 V.
A 200 μW Eight-Channel EEG Acquisition ASIC for Ambulatory EEG Systems The growing interest toward the improvement of patients' quality of life and the use of medical signals in nonmedical applications such as entertainment, sports, and brain-computer interfaces requires the implementation of miniaturized and wireless biopotential acquisition systems with ultralow power dissipation. Therefore, this paper presents the implementation of a complete EEG acquisition ASIC ...
A 13 μA analog signal processing IC for accurate recognition of multiple intra-cardiac signals. A low-power analog signal processing IC is presented for low-power heart rhythm analysis. The ASIC features 3 identical but independent intra-ECG readout channels, each equipped with an analog QRS feature extractor for low power consumption and fast diagnosis of the fatal case. A 16-level digitized sine-wave synthesizer together with a synchronous readout circuit can measure bio-impedance in the r...
A Low-Noise Area-Efficient Chopped VCO-Based CTDSM for Sensor Applications in 40-nm CMOS. An area-efficient voltage-sensing readout circuit employing chopped voltage-controlled oscillator (VCO)-based continuous-time delta-sigma modulator (CTDSM) is presented in this paper. This VCO-based CTDSM features direct connection to sensors to eliminate pre-amplifier for achieving better hardware efficiency. The VCO is designed as a trans-conductor current-controlled oscillator, which is a fully...
A 16-Channel Patient-Specific Seizure Onset and Termination Detection SoC With Impedance-Adaptive Transcranial Electrical Stimulator A 16-channel noninvasive closed-loop beginning- and end-of-seizure detection SoC is presented. The dual-channel charge recycled (DCCR) analog front end (AFE) simultaneously chops and time-multiplexes an amplifier between two channels while exploiting a fast-settling DC servo-loop, with current consumption and NEF of 0.9 μA/channel and 3.29/channel, respectively. The dual-detector architectu...
A Trimodal Wireless Implantable Neural Interface System-on-Chip A wireless and battery-less trimodal neural interface system-on-chip (SoC), capable of 16-ch neural recording, 8-ch electrical stimulation, and 16-ch optical stimulation, all integrated on a 5 × 3 mm² chip fabricated in a 0.35-μm standard CMOS process. The trimodal SoC is designed to be inductively powered and communicated. The downlink data telemetry utilizes on-off keying pulse-position modulation (OOK-PPM) of the power carrier to deliver configuration and control commands at 50 kbps. The analog front-end (AFE) provides adjustable mid-band gain of 55-70 dB, low/high cut-off frequencies of 1-100 Hz/10 kHz, and input-referred noise of 3.46 μVrms within a 1 Hz-50 kHz band. AFE outputs of every two channels are digitized by a 50 kS/s 10-bit SAR-ADC, and multiplexed together to form a 6.78 Mbps data stream to be sent out by OOK modulating a 434 MHz RF carrier through a power amplifier (PA) and 6 cm monopole antenna, which form the uplink data telemetry. Optical stimulation has a switched-capacitor based stimulation (SCS) architecture, which can sequentially charge four storage capacitor banks up to 4 V and discharge them in selected μLEDs at instantaneous current levels of up to 24.8 mA on demand. Electrical stimulation is supported by four independently driven stimulating sites at 5-bit controllable current levels in the ±(25-775) μA range, while active/passive charge balancing circuits ensure safety. In vivo testing was conducted on four anesthetized rats to verify the functionality of the trimodal SoC.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
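Chord's single operation, mapping a key onto a node, comes down to finding the successor of the key's identifier on a hash ring. The centralized sketch below shows that mapping only; the actual protocol distributes the lookup through per-node finger tables to reach O(log N) hops, which is not modeled here, and the node names are assumptions.

```python
import hashlib

def chord_id(key, m=8):
    # Map a key onto an m-bit identifier ring via a base hash function.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, kid):
    # First node clockwise from identifier kid, wrapping around the ring.
    for nid in sorted(node_ids):
        if nid >= kid:
            return nid
    return min(node_ids)

nodes = {chord_id(n) for n in ('node-a', 'node-b', 'node-c', 'node-d')}
key = 'some-data-item'
print(successor(nodes, chord_id(key)))  # identifier of the responsible node
```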
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Termination detection for diffusing computations
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
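A heavily simplified, synchronous version of the projected multi-agent subgradient iteration is sketched below: mix neighbor estimates with a fixed doubly stochastic matrix W, take a local subgradient step, then project. The paper's state-dependent, randomly varying communication model is deliberately replaced by this static W, and all names and constants are illustrative.

```python
import numpy as np

def consensus_subgradient(subgrads, projections, W, x0, steps=200, alpha=0.05):
    """Synchronous projected multi-agent subgradient sketch (static weights)."""
    x = np.array(x0, dtype=float)                 # one row per agent
    for _ in range(steps):
        x = W @ x                                 # consensus (mixing) step
        for i in range(x.shape[0]):
            # local subgradient step followed by projection onto X_i
            x[i] = projections[i](x[i] - alpha * subgrads[i](x[i]))
    return x

# two agents minimizing (x-1)^2 + (x+1)^2 over [-2, 2]; optimum is x = 0
subgrads = [lambda v: 2 * (v - 1), lambda v: 2 * (v + 1)]
projections = [lambda v: np.clip(v, -2, 2)] * 2
W = np.array([[0.5, 0.5], [0.5, 0.5]])
print(consensus_subgradient(subgrads, projections, W, x0=[[2.0], [-2.0]]))
```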
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
Enhancing GNU Radio with Heterogeneous Computing Software radio system performance can be significantly enhanced with appropriate architectural choices, such as the GNU Radio USRP's division of roles between an FPGA and a general purpose CPU. Heterogeneous architectures provide the most flexibility for achieving high computational performance, yet have not been fully exploited by today's software radio architectures. This paper reports on a GNU Radio implementation for the Texas Instruments TCI663X SOC DSP/ARM that was publicly released in 2014 and has seen widespread use in the telecommunications industry.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
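The lasso is the review's canonical worked example and fits in a few lines: splitting minimize ½‖Ax − b‖² + λ‖z‖₁ subject to x = z yields a cached linear solve, a soft-threshold, and a scaled dual update. The sketch below follows that standard iteration; problem sizes and parameters are arbitrary assumptions.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z (scaled form)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # cache factorization
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: solve (A'A + rho*I) x = A'b + rho*(z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding of x + u at level lam/rho
        v = x + u
        z = np.maximum(0, v - lam / rho) - np.maximum(0, -v - lam / rho)
        u = u + x - z                                  # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0, 0, -2.0, 0]) + 0.01 * rng.standard_normal(20)
print(np.round(admm_lasso(A, b, lam=0.5), 3))          # sparse estimate
```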
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√(n/B)) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√(n/B)). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √(n/B))-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Mechanism Design-Based Secure Leader Election Model for Intrusion Detection in MANET In this paper, we study leader election in the presence of selfish nodes for intrusion detection in mobile ad hoc networks (MANETs). To balance the resource consumption among all nodes and prolong the lifetime of an MANET, nodes with the most remaining resources should be elected as the leaders. However, there are two main obstacles in achieving this goal. First, without incentives for serving others, a node might behave selfishly by lying about its remaining resources and avoiding being elected. Second, electing an optimal collection of leaders to minimize the overall resource consumption may incur a prohibitive performance overhead, if such an election requires flooding the network. To address the issue of selfish nodes, we present a solution based on mechanism design theory. More specifically, the solution provides nodes with incentives in the form of reputations to encourage nodes in honestly participating in the election process. The amount of incentives is based on the Vickrey, Clarke, and Groves (VCG) model to ensure truth-telling to be the dominant strategy for any node. To address the optimal election issue, we propose a series of local election algorithms that can lead to globally optimal election results with a low cost. We address these issues in two possible application settings, namely, Cluster-Dependent Leader Election (CDLE) and Cluster-Independent Leader Election (CILE). The former assumes given clusters of nodes, whereas the latter does not require any preclustering. Finally, we justify the effectiveness of the proposed schemes through extensive experiments.
The Eventual Leadership in Dynamic Mobile Networking Environments Eventual leadership has been identified as a basic building block to solve synchronization or coordination problems in distributed computing systems. However, it is a challenging task to implement the eventual leadership facility, especially in dynamic distributed systems, where the global system structure is unknown to the processes and can vary over time. This paper studies the implementation of a leadership facility in infrastructured mobile networks, where an unbounded set of mobile hosts arbitrarily move in the area covered by fixed mobile support stations. Mobile hosts can crash and suffer from disconnections. We develop an eventual leadership protocol based on a time-free approach. The mobile support stations exchange queries and responses on behalf of mobile hosts. With assumptions on the message exchange flow, a correct mobile host is eventually elected as the unique leader. Since no time property is assumed on the communication channels, the proposed protocol is especially effective and efficient in mobile environments, where time-based properties are difficult to satisfy due to the dynamics of the network.
Top K-leader election in mobile ad hoc networks Many applications in mobile ad hoc networks (MANETs) require multiple nodes to act as leaders. Given the resource constraints of mobile nodes, it is desirable to elect resource-rich nodes with higher energy or computational capabilities as leaders. In this paper, we propose a novel distributed algorithm to elect top-K weighted leaders in MANETs where weight indicates available node resources. Frequent topology changes, limited energy supplies, and variable message delays in MANETs make the weight-based K leader election a non-trivial task. So far, there is no algorithm for weight-based K leader election in distributed or mobile environments. Moreover, existing single leader election algorithms for ad hoc networks are either unsuitable of extending to elect weight-based K leaders or they perform poorly under dynamic network conditions. In our proposed algorithm, initially few coordinator nodes are selected locally which collect the weights of other nodes using the diffusing computation approach. The coordinator nodes then collaborate together, so that, finally the highest weight coordinator collects weights of all the nodes in the network. Besides simulation we have also implemented our algorithm on a testbed and conducted experiments. The results prove that our proposed algorithm is scalable, reliable, message-efficient, and can handle dynamic topological changes in an efficient manner.
An efficient leader election protocol for mobile networks In this paper, we present a leader election protocol that works under frequent network changes and node mobility. Our proposed protocol, which operates well in ad hoc networks, is based on electing a unique node that outperforms all the other nodes in a cluster identified by our protocol. We discuss our protocol and present an illustrative example to show how our proposed scheme works in a mobile network. We also show how our algorithm succeeds in electing a unique leader in a mobile ad hoc network environment.
An asynchronous leader election algorithm for dynamic networks An algorithm for electing a leader in an asynchronous network with dynamically changing communication topology is presented. The algorithm ensures that, no matter what pattern of topology changes occur, if topology changes cease, then eventually every connected component contains a unique leader. The algorithm combines ideas from the Temporally Ordered Routing Algorithm (TORA) for mobile ad hoc networks [16] with a wave algorithm [21], all within the framework of a height-based mechanism for reversing the logical direction of communication links [6]. It is proved that in certain well-behaved situations, a new leader is not elected unnecessarily.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
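The heavy-edge heuristic at the core of the coarsening phase is small enough to sketch: visit vertices in random order and match each unmatched vertex with its unmatched neighbor of maximum edge weight. The sketch shows one matching pass only; collapsing matched pairs, partitioning the coarsest graph, and the KL-style refinement are omitted, and the graph encoding is an assumption.

```python
import random

def heavy_edge_matching(adj):
    """One heavy-edge matching pass; adj maps u -> {v: edge_weight}."""
    matched, pairs = set(), []
    for u in random.sample(list(adj), len(adj)):
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)       # heaviest incident unmatched edge
            matched.update({u, v})
            pairs.append((u, v))
        else:
            matched.add(u)               # stays a singleton at coarser level
    return pairs

adj = {1: {2: 5, 3: 1}, 2: {1: 5, 4: 2}, 3: {1: 1, 4: 4}, 4: {2: 2, 3: 4}}
print(heavy_edge_matching(adj))          # e.g. [(1, 2), (3, 4)]
```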
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
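The paper's conditions come from an algebraic transition-matrix representation; a brute-force counterpart is to enumerate the reachable set directly, since controllability from a state amounts to that set covering the whole state space. The sketch below does this for an assumed two-node toy network, not an example from the paper.

```python
from itertools import product

def reachable(step, x0, n_inputs):
    """All states reachable from x0; step(state, u) gives the next state."""
    frontier, seen = {x0}, {x0}
    while frontier:
        nxt = set()
        for x in frontier:
            for u in product((0, 1), repeat=n_inputs):
                y = step(x, u)
                if y not in seen:
                    seen.add(y)
                    nxt.add(y)
        frontier = nxt
    return seen

# assumed toy network: x1' = x2 XOR u, x2' = x1 AND u
step = lambda x, u: (x[1] ^ u[0], x[0] & u[0])
print(sorted(reachable(step, (0, 0), n_inputs=1)))  # all four states reachable
```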
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
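As a toy version of the modulation step only: bits map to short or long inter-packet gaps, with uniform jitter standing in for channel perturbation. CoCo's actual contribution, the coding layer that trades off robustness, rate, and undetectability, is not reproduced; every constant below is an arbitrary assumption.

```python
import random

def modulate(bits, base=0.05, delta=0.03, jitter=0.005):
    # 0-bit -> short gap, 1-bit -> long gap, plus bounded random jitter.
    return [base + bit * delta + random.uniform(0, jitter) for bit in bits]

def demodulate(delays, base=0.05, delta=0.03):
    # Threshold halfway between the two nominal inter-packet delays.
    return [int(d > base + delta / 2) for d in delays]

bits = [1, 0, 1, 1, 0]
assert demodulate(modulate(bits)) == bits  # jitter < delta/2, so exact
```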
Design of ultra-wide-load, high-efficient DC-DC buck converters The paper presents the design of a current-mode control DC-DC buck converter with pulse-width modulation (PWM) mode. The converter achieves over 90% efficiency across a load current range of 50 mA to 500 mA, with a maximum power efficiency of 95.6%; the circuit was simulated in the TSMC 0.35 μm CMOS process. To achieve high efficiency over an ultra-wide load range, the design employs two PMOS transistors as switches. Results show that the converter achieves above 90% efficiency over the range from 30 mA to 1200 mA, with a maximum efficiency of 96.36%, and that the additional switch transistor more than doubles the current load range. With two PMOS transistors, the proposed converter can also provide 3 different load ranges, so it can be programmed for applications operating in those three ranges.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
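As a rough stand-in for the paper's market-weighted headlamp pattern, a generic Lambertian LOS link model plus an on-off-keying bit-error-rate formula can reproduce the qualitative distance/BER tradeoff; every parameter name and value below is an illustrative assumption, not a quantity from the paper:

```python
from math import cos, log, pi, radians, erfc, sqrt

def lambertian_los_power(Pt, d, phi_deg, psi_deg, area, half_angle_deg=30.0):
    """Received optical power on a simplified Lambertian LOS link.
    Pt: transmit power [W], d: distance [m], phi/psi: emission and
    incidence angles, area: PD area [m^2]."""
    m = -log(2) / log(cos(radians(half_angle_deg)))   # Lambertian order
    phi, psi = radians(phi_deg), radians(psi_deg)
    return Pt * (m + 1) / (2 * pi * d**2) * cos(phi)**m * area * cos(psi)

def ook_ber(snr):
    """BER of on-off keying with an ideal threshold receiver: Q(sqrt(SNR))."""
    return 0.5 * erfc(sqrt(snr) / sqrt(2))

for d in (5.0, 10.0, 20.0):
    print(d, lambertian_los_power(Pt=1.0, d=d, phi_deg=5, psi_deg=5, area=1e-4))
```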
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
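The offloading idea can be illustrated with a toy decision rule: dense frontiers keep the analog crossbar busy enough to amortize its setup, while sparse frontiers favor the digital cores' cheap per-edge execution. This is a hypothetical heuristic for exposition, not Hetraph's actual policy:

```python
def offload(frontier_size, num_vertices, crossbar_rows, density_threshold=0.1):
    """Illustrative heterogeneity-aware offload rule (hypothetical).
    Routes dense frontiers to the memristor crossbar and sparse ones
    to the CMOS cores."""
    density = frontier_size / num_vertices
    if density >= density_threshold and frontier_size >= crossbar_rows:
        return "memristor-crossbar"   # enough work to fill the crossbar
    return "cmos-cores"               # low utilization: digital wins

print(offload(5000, 10000, 512))   # memristor-crossbar
print(offload(40, 10000, 512))     # cmos-cores
```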
1.1
0.05
0.033333
0.014286
0.005882
0
0
0
0
0
0
0
0
0
First Order Statistic Based Fast Blind Calibration of Time Skews for Time-Interleaved ADCs A full digital background method is proposed in this brief for timing mismatch calibration of time-interleaved analog-to-digital converters (TIADCs) with wide-sense stationary input signals. Firstly, errors caused by timing mismatches are modeled as additive errors by applying the Taylor series approximation. Then, a mismatch estimation approach based on first-order statistics is proposed. In addition, computations are simplified significantly by using some valuable properties of the signals in the TIADC channels. Further, a variable step-size iterative technique is presented to reduce steady-state errors, so the mismatch estimates are guaranteed to converge to the actual values within three iterations. Theoretical analyses and experiments show that the proposed method has the advantages of low complexity and fast convergence over most other conventional methods. The proposed method is feasible for fast on-line timing mismatch calibration of TIADCs.
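The additive-error model can be exercised numerically: if a channel samples at a skewed instant, the first-order Taylor term dt·x'(n) approximates the resulting error, so subtracting a derivative estimate removes most of it. A minimal simulation of this idea (illustrative tone frequency and skew values, and a plain central-difference differentiator rather than the brief's estimator):

```python
import numpy as np

M, N = 2, 4096                       # two interleaved channels, N samples
f0 = 0.1                             # input tone frequency (cycles/sample)
skew = np.array([0.0, 0.02])         # timing skew per channel, in samples

n = np.arange(N)
dt = skew[n % M]                     # skew pattern repeats every M samples
x = np.sin(2 * np.pi * f0 * (n + dt))        # skewed TIADC output
x_ideal = np.sin(2 * np.pi * f0 * n)

# First-order Taylor correction: x(n + dt) ~ x(n) + dt * x'(n),
# with x'(n) estimated by a central-difference derivative filter.
dxdt = np.gradient(x)
x_corr = x - dt * dxdt

print(np.max(np.abs(x - x_ideal)))       # error before correction
print(np.max(np.abs(x_corr - x_ideal)))  # an order of magnitude smaller
```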
Seven-bit 700-MS/s Four-Way Time-Interleaved SAR ADC With Partial $V_{\mathrm{cm}}$-Based Switching. This brief presents a 7-bit 700-MS/s four-way time-interleaved successive approximation register (SAR) analog-to-digital converter (ADC). A partial Vcm-based switching method is proposed that requires less digital overhead from the SAR controller and achieves better conversion accuracy. Compared with switchback switching, the proposed method can further reduce the common mode variation by 50%. In ...
Adaptive Blind Timing Mismatch Calibration with Low Power Consumption in M-Channel Time-Interleaved ADC. This paper proposes an adaptive blind calibration scheme to minimize timing mismatch in an M-channel time-interleaved analog-to-digital converter (TIADC). By using a derivative filter, the timing mismatch can be calibrated with coefficients estimated by calculating the average value of the relative timing errors for all sub-ADCs. The coefficients can be estimated by utilizing the filtered-X least-mean-square algorithm. The main advantages of the proposed method are that the difficulty of implementation and the power consumption can be reduced dramatically. Simulation results show the effectiveness of the proposed calibration technique. The design is synthesized on the TSMC 55 nm technology, and the estimated power consumption of the digital part is about 4.27 mW with a 1.2 V supply voltage. Measurement results from the FPGA validation system show that the proposed calibration can improve the SNR of a four-channel 400 MHz 14-b real TIADC system from 39.82 to 65.13 dB.
A 28-nm 10-b 2.2-GS/s 18.2-mW Relative-Prime Time-Interleaved Sub-Ranging SAR ADC With On-Chip Background Skew Calibration This article presents a relative-prime-based time-interleaved (RP TI) sub-ranging successive-approximation register (SAR) analog-to-digital converter (ADC) with on-chip background skew calibration. The proposed calibration aligns the sampling time of every fine ADC (F-ADC) to that of a particular coarse ADC (C-ADC) that works as a reference ADC. To avoid the unwanted calibration tone from the refe...
A Spectral-Correlation-Based Blind Calibration Method for Time-Interleaved ADCs With the quick expansion of signal bandwidth in communication systems, time-interleaved analog-to-digital converters (TI-ADCs) provide a quick pathway to very high sampling speeds. Nonetheless, mismatches among the sub-ADC channels are major performance-limiting factors for TI-ADCs. With the scaling of CMOS processes, all-digital background calibration techniques with blind estimation have been actively pursued in recent years to compensate for the mismatches of TI-ADCs. In this paper, a new all-digital background calibration technique is introduced. The technique achieves blind estimation by exploiting the correlation difference between the signal component and the mismatch spurs. This method can process wideband signals and can be implemented with low calibration overhead. The technique is verified with extensive simulations over a 2GS/s 12bit 16-channel ADC. Simulation shows that, with ADC mismatch spurs over -40dBc, the new technique can reliably suppress the spurs to be lower than -80dBc within the duration of about $1.2\times 10^{6}$ samples with a typical wideband OFDM (orthogonal frequency-division multiplexing) signal.
A High-Precision Time Skew Estimation and Correction Technique for Time-Interleaved ADCs This paper presents an all-digital background calibration technique for the time skew mismatch in time-interleaved ADCs (TIADCs). The technique jointly estimates all of the time skew values by processing the outputs of a bank of correlators. A low-complexity sampling sequence intervention technique, suitable for successive approximation register (SAR) ADC architectures, is proposed to overcome the limitations associated with blind estimation. A two-stage digital correction mechanism based on the Taylor series is proposed to satisfy the target high-precision correction. A quantitative study is performed regarding the requirements imposed on the digital correction circuit in order to satisfy the target performance and yield, and a corresponding filter design method is proposed, which is tailored to meet these requirements. Mitchell’s logarithmic multiplier is adopted for the implementation of the principal multipliers in both the estimation and correction mechanisms, leading to a 25% area and power reduction in the estimation circuit. The proposed calibration is synthesized using a TSMC 28-nm HPL process targeting a 2.4-GHz sampling frequency for an eight-sub-ADC system. The calibration block occupies 0.03 mm² and consumes 11 mW. The algorithm maintains the SNDR above 65 dB for a sinusoidal input within the target bandwidth.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Merged Two-Stage Power Converter With Soft Charging Switched-Capacitor Stage in 180 nm CMOS In this paper, we introduce a merged two-stage dc-dc power converter for low-voltage power delivery. By separating the transformation and regulation function of a dc-dc power converter into two stages, both large voltage transformation and high switching frequency can be achieved. We show how the switched-capacitor stage can operate under soft charging conditions by suitable control and integration (merging) of the two stages. This mode of operation enables improved efficiency and/or power density in the switched-capacitor stage. A 5-to-1 V, 0.8 W integrated dc-dc converter has been developed in 180 nm CMOS. The converter achieves a peak efficiency of 81%, with a regulation stage switching frequency of 10 MHz.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
Barrier certificates for nonlinear model validation Methods for model validation of continuous-time nonlinear systems with uncertain parameters are presented in this paper. The methods employ functions of state-parameter-time, termed barrier certificates, whose existence proves that a model and a feasible parameter set are inconsistent with some time-domain experimental data. A very large class of models can be treated within this framework; this includes differential-algebraic models, models with memoryless/dynamic uncertainties, and hybrid models. Construction of barrier certificates can be performed by convex optimization, utilizing recent results on the sum of squares decomposition of multivariate polynomials.
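In the spirit of that formulation (the notation here is assumed for illustration, not quoted from the paper), a barrier certificate $B(x,\theta,t)$ invalidates a model $\dot{x} = f(x,\theta,t)$ against initial data $X_0$, final data $X_T$, and a feasible parameter set $\Theta$ when:

```latex
B(x_T,\theta,T) - B(x_0,\theta,0) > 0
  \quad \forall\, x_0 \in X_0,\ x_T \in X_T,\ \theta \in \Theta,
\qquad
\frac{\partial B}{\partial x}\, f(x,\theta,t) + \frac{\partial B}{\partial t} \le 0
  \quad \text{on } X \times \Theta \times [0,T].
```

The second condition forbids $B$ from increasing along any model trajectory, while the first says the data would require it to increase; the contradiction shows no feasible parameter can reproduce the data. For polynomial models, the search for such a $B$ can be cast as a sum-of-squares program, as the abstract notes.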
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables, for which no efficient technique to identify the beginning of AES rounds was known; such identification is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Cognitive Aeronautical Communication System The paper explores the system and architecture requirements for cognitive-driven reconfigurable hardware for an aeronautical platform, such as commercial aircraft or high-altitude platforms. With advances in components and processing hardware, mobile platforms are ideal candidates for configurable hardware that can morph itself, given the location and available wireless service. This paper proposes a system for an intelligent, self-configurable software and hardware solution for an aeronautical system.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
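The core key-to-node mapping is easy to sketch. The following Python toy shows consistent hashing onto the identifier circle with successor lookup; it deliberately omits Chord's finger tables, so lookup here is a direct search over a known node list rather than the protocol's O(log N) distributed routing:

```python
import hashlib
from bisect import bisect_left

def chord_id(key, m=16):
    """Hash a key onto the 2**m identifier circle (truncated SHA-1)."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return h % (2 ** m)

def successor(node_ids, kid):
    """First node clockwise from key identifier `kid` on the circle."""
    i = bisect_left(node_ids, kid)
    return node_ids[i % len(node_ids)]   # wrap past the largest id

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
key = chord_id("some-data-item")
print(key, "->", successor(nodes, key))
```

Storing each key/data pair at the successor of its identifier is exactly the "map the key onto a node" operation the abstract describes; node joins and leaves only move keys between adjacent nodes on the circle.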
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
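As a concrete instance, the scaled-form ADMM iterations for the lasso mentioned above fit in a few lines; this is a textbook-style sketch with illustrative problem sizes, not the review's code:

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    Atb = A.T @ b
    Linv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached x-update factor
    x, z, u = (np.zeros(n) for _ in range(3))
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = Linv @ (Atb + rho * (z - u))   # quadratic x-update
        z = soft(x + u, lam / rho)         # shrinkage z-update
        u = u + x - z                      # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
# First entries come out close to x_true, the rest near zero.
print(np.round(lasso_admm(A, b), 2)[:5])
```

The same split-then-coordinate pattern is what makes the method attractive for the distributed settings the review surveys: the x-update can be partitioned across machines while the z-update enforces consensus.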
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Robust Tracking Control of Interval Type-2 Positive Takagi–Sugeno Fuzzy Systems With External Disturbance Nonlinear positive systems are ubiquitous in practical applications. The constraint of positive conditions makes the design of this system still quite challenging even for the existing advanced control theory. In addition, when uncertainty and external disturbance also act on the positive nonlinear system at the same time, it will aggravate the complexity of the problem. In response to this problem, for the first time, this article presents a feasible robust tracking control strategy for the positive nonlinear systems with external disturbance and subject to uncertainty. In the proposed control strategy, the positive Takagi–Sugeno (T–S) fuzzy model is used to effectively represent the nonlinearity of the positive system, and the interval type-2 (IT2) fuzzy sets are used to deal with the uncertainty. The focus of tracking control is to design a reasonable IT2 fuzzy controller, which can make the states of the plant follow those of the given stable positive reference system as accurately as possible under the $H_\infty$ performance. Different from the tracking control research of general systems in the existing literature, in the tracking control analysis of positive systems, the generation of nonconvex conditions is inevitable. This article smoothly removes the obstacle of this problem. In addition, this article successfully attempts to relax the conservativeness of the basic analysis conditions by improving the IT2 membership-function-dependent method, thereby improving the tracking effect under $H_\infty$ performance. Finally, a detailed simulation example is given to verify the effectiveness of the proposed strategy.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Filter Design for Positive T–S Fuzzy Continuous-Time Systems With Time Delay Using Piecewise-Linear Membership Functions This article focuses on the filtering problem and stability analysis for positive Takagi-Sugeno (T-S) fuzzy systems with time delay under $L_1$-induced performance. Given the importance of estimating system states and the scarcity of filter design results for positive nonlinear systems, this is an attractive and meaningful topic well worth studying. In order to fully exploit and take advantage of the positivity of positive T-S fuzzy systems, many commonly used methods, for instance the free-weighting matrix approach and similarity transformation, are probably not suitable for positive systems. To address this hard-nut-to-crack problem, an auxiliary variable is introduced so that the augmentation approach can be employed to carry out the positivity and stability analysis of filtering error systems. In addition, another obstacle that cannot be ignored is the existence of nonconvex terms in the stability and positivity conditions. For getting around this barrier, some iterative linear matrix inequality algorithms have been proposed in the literature. However, considering the weakness that these methods cannot guarantee convergence to a numerical solution and that the iterative process is exhaustive, we present an effective matrix decoupling method to convert the nonconvex conditions into convex ones in this article. Furthermore, a linear copositive Lyapunov function, which incorporates the positivity of system states and time delay at the same time, is chosen so that the positivity characteristic of filtering error systems can be captured further. However, because plenty of valuable information of the membership functions is ignored, the obtained results are conservative. For the sake of relaxing the conservativeness, the advanced piecewise-linear membership functions approximation method is utilized to facilitate the stability and positivity analysis. Therefore, relaxed stability and positivity conditions, which are cast as sum of squares (SOS), are obtained and can be solved numerically. Finally, the effectiveness of the designed fuzzy filtering strategy with satisfying $L_1$-induced performance is demonstrated by a simulation example.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
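One common shape for such MLCLF-based average-dwell-time conditions (a sketch consistent with this line of work; the paper's exact inequalities may differ) is, for a continuous-time SPLS $\dot{x} = A_{\sigma(t)} x$ with Metzler matrices $A_p$:

```latex
V_p(x) = v_p^{\top} x, \qquad v_p \succ 0 \ \text{for each subsystem } p,
\qquad
A_p^{\top} v_p + \lambda v_p \prec 0,
\qquad
v_p \preceq \mu\, v_q \quad (\mu \ge 1),
```

with global uniform asymptotic stability then guaranteed for every switching signal whose average dwell time satisfies $\tau_a > \ln(\mu)/\lambda$. Setting $\mu = 1$ collapses the $v_p$ to a single common copositive Lyapunov function, which is the arbitrary-switching special case the abstract mentions.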
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using an interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of an IT2 fuzzy H∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
Adaptive Fault-Tolerant Tracking Control for Discrete-Time Multiagent Systems via Reinforcement Learning Algorithm This article investigates the adaptive fault-tolerant tracking control problem for a class of discrete-time multiagent systems via a reinforcement learning algorithm. The action neural networks (NNs) are used to approximate unknown and desired control input signals, and the critic NNs are employed to estimate the cost function in the design procedure. Furthermore, the direct adaptive optimal controllers are designed by combining the backstepping technique with the reinforcement learning algorithm. Comparing the existing reinforcement learning algorithm, the computational burden can be effectively reduced by using the method of less learning parameters. The adaptive auxiliary signals are established to compensate for the influence of the dead zones and actuator faults on the control performance. Based on the Lyapunov stability theory, it is proved that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. Finally, some simulation results are presented to illustrate the effectiveness of the proposed approach.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
A Survey of Reachability and Controllability for Positive Linear Systems. This paper is a survey of reachability and controllability results for discrete-time positive linear systems. It presents a variety of criteria in both algebraic and digraph forms for recognising these fundamental system properties with direct implications not only in dynamic optimization problems (such as those arising in inventory and production control, manpower planning, scheduling and other areas of operations research) but also in studying properties of reachable sets, in feedback control problems, and others. The paper highlights the intrinsic combinatorial structure of reachable/controllable positive linear systems and reveals the monomial components of such systems. The system matrix decomposition into monomial components is demonstrated by solving some illustrative examples.
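A classical criterion of this monomial type, that the reachability matrix of a discrete-time positive system must contain an n×n monomial submatrix (n monomial columns covering all coordinate directions), is easy to test; the following Python sketch is illustrative of that combinatorial check rather than a substitute for the survey's full set of criteria:

```python
import numpy as np

def is_positively_reachable(A, B):
    """Monomial test for x(t+1) = A x(t) + B u(t) with A, B >= 0:
    collect the columns of [B, AB, ..., A^{n-1}B] and check that the
    monomial columns (exactly one positive entry) cover all n axes."""
    n = A.shape[0]
    cols, M = [], B.copy()
    for _ in range(n):
        cols.extend(M.T)          # columns of the current power block
        M = A @ M
    covered = set()
    for c in cols:
        nz = np.flatnonzero(c)
        if len(nz) == 1:          # monomial column: one nonzero entry
            covered.add(int(nz[0]))
    return covered == set(range(n))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
print(is_positively_reachable(A, B))   # True: B = e1, AB = e2
```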
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction: The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
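A minimal sketch of the in-network idea behind TAG, using AVERAGE as the running example: every node merges its children's partial state records <sum, count> and forwards a single record to its parent instead of raw readings. The tree layout and names below are invented for illustration; TAG itself expresses this through declarative queries running on sensor motes.

```python
# Partial-state aggregation for AVERAGE over a routing tree {parent: children}.

def make_state(reading):
    return (reading, 1)                      # <sum, count> partial state

def merge(s1, s2):
    return (s1[0] + s2[0], s1[1] + s2[1])    # merge two partial states

def evaluate(state):
    total, count = state
    return total / count

def aggregate(tree, readings, node):
    """Post-order walk: combine this node's reading with its subtrees."""
    state = make_state(readings[node])
    for child in tree.get(node, []):
        state = merge(state, aggregate(tree, readings, child))
    return state

tree = {"root": ["a", "b"], "a": ["c"]}
readings = {"root": 10.0, "a": 20.0, "b": 30.0, "c": 40.0}
print(evaluate(aggregate(tree, readings, "root")))   # 25.0
```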
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justify analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
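The event-driven conversion this comparison rests on can be sketched behaviorally: a level-crossing sampler emits an (index, level) event only when the input crosses the next quantization level, so slowly varying biosignals produce far fewer data than fixed-rate sampling. This is a toy model of the sampling principle only, not the paper's circuit-level power analysis.

```python
import numpy as np

def level_crossing_sample(signal, lsb):
    """Emit (sample index, level) events at quantization-level crossings."""
    events = []
    level = int(round(signal[0] / lsb))
    for i, x in enumerate(signal):
        while x >= (level + 1) * lsb:        # upward crossing(s)
            level += 1
            events.append((i, level))
        while x <= (level - 1) * lsb:        # downward crossing(s)
            level -= 1
            events.append((i, level))
    return events

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 3 * t)
events = level_crossing_sample(x, lsb=2.0 ** -4)
print(len(events), "events instead of", len(x), "fixed-rate samples")
```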
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
Variability in TCP round-trip times We measured and analyzed the variability in round trip times (RTTs) within TCP connections using passive measurement techniques. We collected eight hours of bidirectional traces containing over 22 million TCP connections between end-points at a large university campus and almost 1 million remote locations. Of these, we used over 1 million TCP connections that yield 10 or more valid RTT samples, to examine RTT variability within a TCP connection. Our results indicate that contrary to observations in several previous studies, RTT values within a connection vary widely. Our results have implications for designing better simulation models, and understanding how round trip times affect the dynamic behavior and throughput of TCP connections.
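A toy version of the per-connection statistics such a passive study computes, assuming RTT samples have already been extracted from the trace. The 10-sample threshold mirrors the paper's validity rule; the particular summary fields are my own choice.

```python
import statistics

def rtt_variability(samples_ms):
    """Summarize within-connection RTT variation; needs >= 10 valid samples."""
    if len(samples_ms) < 10:
        return None                                 # filtered out, as in the study
    return {
        "min": min(samples_ms),
        "median": statistics.median(samples_ms),
        "max": max(samples_ms),
        "span_ratio": max(samples_ms) / min(samples_ms),  # crude spread measure
    }

conn = [12.1, 13.0, 55.2, 12.4, 80.9, 12.9, 14.2, 13.7, 47.5, 12.8]
print(rtt_variability(conn))    # wide spread within a single connection
```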
End-to-end routing behavior in the internet The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exceptions being Chinoy's (1993) analysis of the dynamics of Internet routing information, and work, similar in spirit, by Labovitz, Malan, and Jahanian (see Proc. SIGCOMM'97, 1997). We report on an analysis of 40000 end-to-end route measurements conducted using repeated “traceroutes” between 37 Internet sites. We analyze the routing behavior for pathological conditions, routing stability, and routing symmetry. For pathologies, we characterize the prevalence of routing loops, erroneous routing, infrastructure failures, and temporary outages. We find that the likelihood of encountering a major routing pathology more than doubled between the end of 1994 and the end of 1995, rising from 1.5% to 3.3%. For routing stability, we define two separate types of stability, “prevalence”, meaning the overall likelihood that a particular route is encountered, and “persistence”, the likelihood that a route remains unchanged over a long period of time. We find that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days. About two-thirds of the Internet paths had routes persisting for either days or weeks. For routing symmetry, we look at the likelihood that a path through the Internet visits at least one different city in the two directions. At the end of 1995, this was the case half the time, and at least one different autonomous system was visited 30% of the time.
Why Do Internet Services Fail, and What Can Be Done About It? We describe the architecture, operational practices, and failure characteristics of three very large-scale Internet services. Our research on architecture and operational practices took the form of interviews with architects and operations staff at those (and several other) services. Our research on component and service failure took the form of examining the operations problem tracking databases from two of the services and a log of service failure post-mortem reports from the third. Architecturally, we find convergence on a common structure: division of nodes into service front-ends and back-ends, multiple levels of redundancy and load-balancing, and use of custom-written software for both production services and administrative tools. Operationally, we find a thin line between service developers and operators, and a need to coordinate problem detection and repair across administrative domains. With respect to failures, we find that operator errors are their primary cause, operator error is the most difficult type of failure to mask, service front-ends are responsible for more problems than service back-ends but fewer minutes of unavailability, and that online testing and more thoroughly exposing and detecting component failures could reduce system failure rates for at least one service.
Merging ring-structured overlay indices: toward network-data transparency. Peer-to-peer index structures distributed and managed over the planet, commonly known as structured overlays (e.g., distributed hash tables) have been touted to play the role of a fundamental building block for internet-scale distributed systems. Traditional designs consider incremental or possibly even parallelized construction of a single overlay, which implicitly assumes global control and coordination to enforce the construction of a unique overlay. However, if merger of originally isolated overlays is made possible, then one can realize decentralized bootstrapping of overlays. So to speak, smaller overlays can be constructed using any of the traditional mechanisms, which can then organically coalesce to form a larger overlay. Such self-organizational attributes of decentralized bootstrapping and organic growth are of paramount importance for large scale systems. In our previous works, we explained the challenges of merging important families of (tree and ring) structured overlays (Datta and Aberer in international workshop on self-organizing systems, 2006), and identified that tree structured overlays are relatively easier to merge in a transparent manner (Datta in IEEE international conference on self-adaptive and self-organizing systems, 2007). In this paper we investigate how ring structured overlays can be merged, both in terms of the necessary algorithms, as well as how it performs during the merger process, taking into account not only the ‘network’ merger process, but also looking into how and whether this process is ‘transparent’ for applications/end-users accessing and using the overlay as an index. We introduce interesting new metrics to evaluate the merger process and carry out asymptotic analysis for estimating the same, besides conducting simulation experiments to validate the theory as well as measure other aspects of the overlays’ performance under merger.
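As a centralized reference point (my own simplification, not the paper's decentralized algorithm), the state a ring merger must converge to is simply the sorted union of both node sets with successor pointers rebuilt over the combined identifier space; the paper's subject is reaching that state without global coordination while the index remains usable.

```python
def merged_ring(ring_a, ring_b):
    """Successor map of the ring the merger should converge to."""
    ids = sorted(set(ring_a) | set(ring_b))
    return {ids[i]: ids[(i + 1) % len(ids)] for i in range(len(ids))}

print(merged_ring([2, 10, 40], [5, 10, 33]))
# {2: 5, 5: 10, 10: 33, 33: 40, 40: 2}
```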
An analytical study of a structured overlay in the presence of dynamic membership In this paper, we present an analytical study of dynamic membership (aka churn) in structured peer-to-peer networks. We use a fluid model approach to describe steady-state or transient phenomena and apply it to the Chord system. For any rate of churn and stabilization rates and any system size, we accurately account for the functional form of the probability of network disconnection as well as the fraction of failed or incorrect successor and finger pointers. We show how we can use these quantities to predict both the performance and consistency of lookups under churn. All theoretical predictions match simulation results. The analysis includes both features that are generic to structured overlays deploying a ring as well as Chord-specific details and opens the door to a systematic comparative analysis of, at least, ring-based structured overlay systems under churn.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
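A toy sketch of the structural idea: each node draws a random membership vector, and at level i all nodes sharing an i-bit prefix form one sorted list, which is what gives skip graphs their balanced-tree-like search paths. Real skip graphs maintain these lists with distributed neighbor pointers; the centralized grouping below is only illustrative.

```python
import random

def membership_vector(bits=4):
    return tuple(random.randint(0, 1) for _ in range(bits))

def build_levels(keys, bits=4):
    """Group keys into the per-level sorted lists a skip graph implies."""
    mv = {k: membership_vector(bits) for k in keys}
    levels = []
    for i in range(bits + 1):
        groups = {}
        for k in sorted(keys):
            groups.setdefault(mv[k][:i], []).append(k)   # share i-bit prefix
        levels.append(groups)
    return levels

for i, groups in enumerate(build_levels([3, 7, 12, 19, 25, 31])):
    print(f"level {i}: {sorted(groups.values())}")
```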
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω---a fundamental failure detector at the core of many algorithms---in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption---the existence of some unknown correct process whose asynchronous links are lossy but fair---we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
Reconstruction of Nonuniformly Sampled Bandlimited Signals Using a Differentiator–Multiplier Cascade This paper considers the problem of reconstructing a bandlimited signal from its nonuniform samples. Based on a discrete-time equivalent model for nonuniform sampling, we propose the differentiator-multiplier cascade, a multistage reconstruction system that recovers the uniform samples from the nonuniform samples. Rather than using optimally designed reconstruction filters, the system improves the...
A DHT-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
A Highly Adaptive Leader Election Algorithm for Mobile Ad Hoc Networks.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0 to score_13: 1.103556, 0.107111, 0.107111, 0.107111, 0.071259, 0.025262, 0.003798, 0.000042, 0, 0, 0, 0, 0, 0
Computing size-independent matrix problems on systolic array processors A methodology to transform dense to band matrices is presented in this paper. This transformation is accomplished by triangular blocks partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
MPFR: A multiple-precision binary floating-point library with correct rounding This article presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend to arbitrary-precision, ideas from the IEEE 754 standard, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved---with no significant slowdown with respect to other arbitrary-precision tools---and discuss a few applications where such a library can be useful.
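To convey the semantics described here, the hedged snippet below uses gmpy2, a Python package that wraps MPFR; every operation is correctly rounded to the active context precision, which is exactly the IEEE-754-style guarantee extended to arbitrary precision.

```python
import gmpy2

# 200-bit significand: 1/3 is correctly rounded at that precision.
gmpy2.get_context().precision = 200
print(gmpy2.mpfr(1) / gmpy2.mpfr(3))

# Drop to 24 bits (IEEE single's significand) to make rounding visible.
gmpy2.get_context().precision = 24
print(gmpy2.mpfr("0.1"))        # nearest 24-bit value to decimal 0.1
```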
Tapered Floating Point: A New Floating-Point Representation It is well known that there is a possible tradeoff in the binary representation of floating-point numbers in which one bit of accuracy can be gained at the cost of halving the exponent range, and vice versa. A way in which the exponent range can be greatly increased while preserving full accuracy for most computations is suggested.
Formal description of systolic algorithms and an analysis of the information flow.
Evaluating the Numerical Stability of Posit Arithmetic The Posit number format has been proposed by John Gustafson as an alternative to the IEEE 754 standard floating-point format. Posits offer a unique form of tapered precision whereas IEEE floating-point numbers provide the same relative precision across most of their representational range. Posits are argued to have a variety of advantages including better numerical stability and simpler exception handling. The objective of this paper is to evaluate the numerical stability of Posits for solving linear systems, where we evaluate the Conjugate Gradient Method to demonstrate an iterative solver and Cholesky Factorization to demonstrate a direct solver. We show that Posits do not consistently improve stability across a wide range of matrices, but we demonstrate that a simple rescaling of the underlying matrix improves convergence rates for the Conjugate Gradient Method and reduces backward error for Cholesky Factorization. We also demonstrate that 16-bit Posit outperforms Float16 for mixed precision iterative refinement, especially when used in conjunction with a matrix re-scaling strategy recently proposed by Nicholas Higham.
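The re-scaling idea can be illustrated independently of the number format: scaling by exact powers of two changes magnitudes without adding any rounding error of its own, pushing entries toward the region where posits carry the most fraction bits. The recipe below is a deliberate simplification for illustration, not the paper's or Higham's exact strategy.

```python
import numpy as np

def pow2_rescale(A):
    """Divide A by a power of two so its largest magnitude lands near 1."""
    s = 2.0 ** np.round(np.log2(np.abs(A).max()))
    return A / s, s

A = np.array([[4.0e6, 1.0e6], [1.0e6, 3.0e6]])
b = np.array([1.0, 2.0])
A_scaled, s = pow2_rescale(A)
x = np.linalg.solve(A_scaled, b) / s    # (A/s) x' = b  =>  x = x'/s
print(np.allclose(A @ x, b))            # True: scaling is exactly undone
```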
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
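Chord's single operation reduces to finding a key's successor on the identifier ring. The sketch below shows that mapping with a centralized sorted list standing in for the distributed finger-table lookup; the ring size M and the naming are toy choices.

```python
import hashlib
from bisect import bisect_right

M = 16                                          # toy ring: identifiers mod 2^M

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key_id):
    """First node identifier clockwise from key_id, wrapping around."""
    i = bisect_right(node_ids, key_id)
    return node_ids[i % len(node_ids)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
k = chord_id("some-data-item")
print(f"key {k} is stored at node {successor(nodes, k)}")
```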
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
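A brute-force counterpart to these algebraic criteria, feasible only for small networks: treat the Boolean control network as a finite transition system and enumerate the states reachable from x0 under all input sequences. The example network is invented for illustration; the paper's point is obtaining such answers through the transition matrix instead of enumeration.

```python
from itertools import product

def reachable_set(step, n_input_bits, x0):
    """BFS over all input choices; step(x, u) returns the next state tuple."""
    inputs = list(product((0, 1), repeat=n_input_bits))
    seen, frontier = {x0}, [x0]
    while frontier:
        x = frontier.pop()
        for u in inputs:
            y = step(x, u)
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

# Toy network with state (A, B) and one input u:  A' = B XOR u,  B' = A AND B.
def step(x, u):
    a, b = x
    return (b ^ u[0], a & b)

print(sorted(reachable_set(step, 1, (1, 1))))   # all four states reachable
```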
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
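The sponge construction underlying this design family can be shown abstractly: absorb rate-sized message blocks into the state, permute, then squeeze the digest out. The permutation below is a throwaway stand-in, emphatically not the present-type permutation, and the padding is a simplified pad10*-style rule, so this toy makes no security claim.

```python
RATE, CAPACITY = 1, 3                 # bytes; state width = rate + capacity
STATE = RATE + CAPACITY

def toy_permutation(state: bytes) -> bytes:
    """Placeholder mixing rounds; stands in for the real permutation."""
    s = list(state)
    for r in range(8):
        s = [(b ^ s[(i + 1) % STATE] ^ r) & 0xFF for i, b in enumerate(s)]
        s = s[1:] + s[:1]
    return bytes(s)

def sponge_hash(msg: bytes, out_len: int = 4) -> bytes:
    msg += b"\x01" + b"\x00" * ((-len(msg) - 1) % RATE)      # pad10*-style
    state = bytes(STATE)
    for i in range(0, len(msg), RATE):                        # absorbing phase
        block = msg[i:i + RATE] + bytes(CAPACITY)
        state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < out_len:                                 # squeezing phase
        out += state[:RATE]
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash(b"abc").hex())
```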
MicroGP—An Evolutionary Assembly Program Generator This paper describes μGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. μGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show μGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
A Delay-Locked Loop Synchronization Scheme for High-Frequency Multiphase Hysteretic DC-DC Converters This paper reports a delay-locked loop (DLL) based hysteretic controller for high-frequency multiphase dc-dc buck converters. The DLL control loop employs the switching frequency of a hysteretic comparator as reference to automatically synchronize the remaining phases and eliminate the need for external synchronization. A dedicated duty cycle control loop is used to enable current sharing and ripple cancellation. We demonstrate a four-phase high-frequency buck converter that operates at 25-70 MHz with fast hysteretic control and output conversion range of 17.5%-80%. The converter achieves an efficiency of 83% at 2 W and 80% at 3.3 W. The circuit has been implemented in a standard 0.5 μm 5 V CMOS process.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp. A ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0
Class-D CMOS Oscillators. This paper presents class-D CMOS oscillators capable of an excellent phase noise performance from a very low power supply voltage. Starting from the recognition of the time-variant nature of the class-D LC tank, accurate expressions of the oscillation frequency, oscillation amplitude, current consumption, phase noise, and figure-of-merit (FoM) have been derived. Compared with the commonly used cla...
Implicit Common-Mode Resonance in LC Oscillators. The performance of a differential LC oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, Q-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In th...
A Noise Circulating Oscillator This paper presents a noise circulating cross-coupled voltage-controlled oscillator (VCO) topology with a transformer-based tank. The introduced noise circulating active core greatly suppresses the effective noise power from the active devices while offering the same amount of negative resistance compared to conventional cross-coupled VCO topologies. The mechanism of noise circulation is investigated with theoretical analysis and further verified by simulation. Due to the broadband nature of the noise circulating technique, the resulting VCO phase noise in both 1/f² and 1/f³ regions is greatly improved over a wide frequency tuning range. A prototype VCO at 2.35 GHz is implemented in a standard 130-nm bulk CMOS process with 0.36-mm² core area. It draws 2.15 mA from a 1.2-V supply. The measured figure-of-merit (FoM) is 193.1/195.0/195.6 dBc/Hz at 100k/1M/10MHz offsets with a 1/f³ phase noise corner of only 50 kHz. The VCO design consistently achieves >192.8 dBc/Hz FoM at 100k/1M/10MHz offsets and <60 kHz 1/f³ phase noise corner over its entire 18.6% frequency tuning range (2.05-2.47 GHz). It also exhibits low supply frequency pushing of -25 and -13 MHz/V at the highest and lowest frequencies, respectively.
Design of Symmetrical Class E Power Amplifiers for Very Low Harmonic-Content Applications Class E power amplifier circuits are very suitable for high efficiency power amplification applications in the radio-frequency and microwave ranges. However, due to the inherent asymmetrical driving arrangement, they suffer significant harmonic contents in the output voltage and current, and usually require substantial design efforts in achieving the desired load matching networks for applications requiring very low harmonic contents. In this paper, the design of a Class E power amplifier with resonant tank being symmetrically driven by two Class E circuits is studied. The symmetrical Class E circuit, under nominal operating conditions, has extremely low harmonic distortions, and the design of the impedance matching network for harmonic filtering becomes less critical. Practical steady-state design equations for Class E operation are derived and graphically presented. Experimental circuits are constructed for distortion evaluation. It has been found that this circuit offers total harmonic distortions which are about an order of magnitude lower than those of the conventional Class E power amplifier.
Analysis of Circuit Noise and Non-Ideal Filtering Impact on Energy Detection Based Ultra-Low-Power Radios Performance. With the coming of age of the Internet of Things, demand for ultra-low power (ULP) radios will continue to grow tremendously. Circuit imperfections, especially in power hungry blocks, i.e., the local oscillators (LO) and band pass filters (BPFs), pose a real challenge for ULP radio designers considering their tight power budget. This brief presents an investigation on the effects of circuit non-i...
A 3.36-GHz Locking-Tuned Type-I Sampling PLL With −78.6-dBc Reference Spur Merging Single-Path Reference-Feedthrough-Suppression and Narrow-Pulse-Shielding Techniques This brief describes a type-I analog sampling phase-locked loop (S-PLL) featuring reference-feedthrough-suppression and narrow-pulse-shielding techniques in a single path to improve the reference (REF) spur. Specifically, we realize the former by inserting a T-shape switch with one center-tap ground, while the latter tackles the voltage ripple caused by the sampling non-idealities. Also, we can tu...
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler&#39;s description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
A Reference-Less Single-Loop Half-Rate Binary CDR This paper proposes a half-rate single-loop reference-less binary CDR that operates from 8.5 Gb/s to 12.1 Gb/s (36% capture range). The high capture range is made possible by adding a novel frequency detection mechanism which limits the magnitude of the phase error between the input data and the VCO clock. The proposed frequency detector produces three phases of the data, and feeds into the phase detector the data phase that minimizes the CDR phase error. This frequency detector, implemented within a 10 Gb/s CDR in Fujitsu's 65 nm CMOS, consumes 11 mW and improves the capture range by up to 6 when it is activated.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was AT91M40400. The results clearly establish scratchpad memory as a low power alternative in most situations with an average energy reduction of 40%. Further the average area-time reduction for the scratchpad memory was 46% of the cache memory.
An Ultra-Wide-Band 0.4-10-GHz LNA in 0.18-μm CMOS A two-stage ultra-wide-band CMOS low-noise amplifier (LNA) is presented. With the common-gate configuration employed as the input stage, the broad-band input matching is obtained and the noise does not rise rapidly at higher frequency. By combining the common-gate and common-source stages, the broad-band characteristic and small area are achieved by using two inductors. This LNA has been fabricated in a 0.18-μm CMOS process. The measured power gain is 11.2-12.4 dB and noise figure is 4.4-6.5 dB with -3-dB bandwidth of 0.4-10 GHz. The measured IIP3 is -6 dBm at 6 GHz. It consumes 12 mW from a 1.8-V supply voltage and occupies only 0.42 mm².
Architecture Aware Partitioning Algorithms Existing partitioning algorithms provide limited support for load balancing simulations that are performed on heterogeneous parallel computing platforms. On such architectures, effective load balancing can only be achieved if the graph is distributed so that it properly takes into account the available resources (CPU speed, network bandwidth). With heterogeneous technologies becoming more popular, the need for suitable graph partitioning algorithms is critical. We developed such algorithms that can address the partitioning requirements of scientific computations, and can correctly model the architectural characteristics of emerging hardware platforms.
Feature selection for medical diagnosis: Evaluation for cardiovascular diseases Machine learning has emerged as an effective medical diagnostic support system. In a medical diagnosis problem, a set of features that are representative of all the variations of the disease are necessary. The objective of our work is to predict more accurately the presence of cardiovascular disease with a reduced number of attributes. We investigate an intelligent system to generate feature subsets with improvement in diagnostic performance. Features ranked with a distance measure are searched through forward inclusion, forward selection and backward elimination search techniques to find the subset that gives improved classification results. We propose a hybrid forward selection technique for cardiovascular disease diagnosis. Our experiment demonstrates that this approach finds smaller subsets and increases the accuracy of diagnosis compared to forward inclusion and back-elimination techniques.
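A generic greedy forward-selection loop of the kind this evaluation builds on; in practice the scoring function would be cross-validated classifier accuracy, and the paper's hybrid variant additionally seeds the search with a distance-measure ranking. The names and the toy scorer are illustrative only.

```python
def forward_select(features, score, max_features=None):
    """Greedily add the feature that most improves score(subset)."""
    selected, best = [], float("-inf")
    while features and (max_features is None or len(selected) < max_features):
        cand = max(features, key=lambda f: score(selected + [f]))
        cand_score = score(selected + [cand])
        if cand_score <= best:        # stop once no feature helps
            break
        selected.append(cand)
        features = [f for f in features if f != cand]
        best = cand_score
    return selected, best

# Toy scorer: pretend only 'age' and 'chol' carry diagnostic signal.
weights = {"age": 0.4, "chol": 0.3, "patient_id": -0.1, "noise": 0.0}
score = lambda subset: sum(weights[f] for f in subset)
print(forward_select(list(weights), score))   # (['age', 'chol'], ~0.7)
```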
Computing the Dynamic Diameter of Non-Deterministic Dynamic Networks is Hard. A dynamic network is a communication network whose communication structure can evolve over time. The dynamic diameter is the counterpart of the classical static diameter; it is the maximum time needed for a node to causally influence any other node in the network. We consider the problem of computing the dynamic diameter of a given dynamic network. If the evolution is known a priori, that is, if the network is deterministic, it is known that it is quite easy to compute this dynamic diameter. If the evolution is not known a priori, that is, if the network is non-deterministic, we show that the problem is hard to solve or approximate. In some cases, this hardness holds also when there is a static connected subgraph for the dynamic network. In this note, we consider an important subfamily of non-deterministic dynamic networks: the time-homogeneous dynamic networks. We prove that it is hard to compute and approximate the value of the dynamic diameter for time-homogeneous dynamic networks.
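For the deterministic case the note calls easy, the dynamic diameter can be computed by propagating causal influence round by round and taking the worst case over all source nodes, as in this sketch; edges_at, the horizon cutoff, and the example schedule are assumptions made for illustration.

```python
def influence_time(edges_at, nodes, src, horizon):
    """Round at which each node is first causally influenced by src."""
    reached, times, t = {src}, {src: 0}, 0
    while len(times) < len(nodes) and t < horizon:
        live = edges_at(t)
        new = ({v for u, v in live if u in reached} |
               {u for u, v in live if v in reached})
        t += 1
        for v in new - reached:
            times[v] = t
        reached |= new
    return times

# Deterministic 3-node schedule alternating between two single-edge graphs.
def edges_at(t):
    return [(0, 1)] if t % 2 == 0 else [(1, 2)]

nodes = [0, 1, 2]
print(max(max(influence_time(edges_at, nodes, s, horizon=10).values())
          for s in nodes))          # dynamic diameter of this schedule: 3
```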
Robust Biopotential Acquisition via a Distributed Multi-Channel FM-ADC. This contribution presents an active electrode system for biopotential acquisition using a distributed multi-channel FM-modulated analog front-end and ADC architecture. Each electrode captures one biopotential signal and converts to a frequency modulated signal using a VCO tuned to a unique frequency. Each electrode then buffers its output onto a shared analog line that aggregates all of the FM-mo...
score_0 to score_13: 1.0525, 0.055, 0.055, 0.05, 0.05, 0.03, 0.0125, 0.0025, 0, 0, 0, 0, 0, 0
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
A simple graph theoretic characterization of reachability for positive linear systems In this paper we consider discrete-time linear positive systems, that is systems defined by a pair (A,B) of non-negative matrices. We study the reachability of such systems which in this case amounts to the freedom of steering the state in the positive orthant by using non-negative control sequences. This problem was solved recently [Canonical forms for positive discrete-time linear control systems, Linear Algebra Appl., 310 (2000) 49]. However we derive here necessary and sufficient conditions for reachability in a simpler and more compact form. These conditions are expressed in terms of particular paths in the graph which is naturally associated with the system.
High order α-planes integration: A new approach to computational cost reduction of General Type-2 Fuzzy Systems. Nowadays, there are different representations of Generalized Type-2 Fuzzy Sets that consider a non-uniform distribution of the uncertainty, for example, the Geometric approach, the Z-Slices method and the α-planes approximation. Each representation has advantages and disadvantages; however, the present work is focused on the α-planes representation, which consists of realizing a horizontal discretization, the solution of each horizontal slice, and then the integration of these planes. Each horizontal slice results in an Interval Type-2 Fuzzy System, so the computational cost is proportional to the discretization level, that is, to the number of α-planes used for modeling the Generalized Type-2 FS. The aim of this work is to reduce the computational cost of Generalized Type-2 FS by a new approach to the α-planes representation. In this paper the Newton–Cotes quadrature for the α-planes integration is proposed, achieving in this way a high-order discrete integration compared with the conventional α-planes integration. The proposed approach aims at reducing the number of α-planes necessary to obtain a good approximation of Generalized Type-2 FS. In order to validate the proposed approach, a set of experiments was realized with randomly generated Generalized Type-2 Fuzzy Sets, using different numbers of α-planes in order to compare the performance of the proposed approach with respect to the conventional approach. On the other hand, the proposed approach was also applied to a control problem, as an example of applications of the proposed approach to real-world problems.
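The proposal can be sketched with composite Simpson's rule, one member of the Newton-Cotes family, applied over α in [0, 1]; the function f below is a stand-in for "solve the Interval Type-2 system induced by this α-plane". Because the rule is exact for low-order polynomials, smooth plane-wise results need far fewer plane evaluations than Riemann-style summation.

```python
def simpson(f, n_planes):
    """Composite Simpson's rule over alpha in [0, 1]; n_planes must be even."""
    h = 1.0 / n_planes
    total = f(0.0) + f(1.0)
    for k in range(1, n_planes):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

# Stand-in per-plane result; quadratic, so Simpson is exact here.
f = lambda alpha: (1.0 - alpha) ** 2

print(simpson(f, 4))     # 0.3333... with only 5 plane evaluations
print(simpson(f, 8))     # same value; extra planes buy nothing for smooth f
```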
Finite-Time Consensus Tracking Neural Network FTC of Multi-Agent Systems The finite-time consensus fault-tolerant control (FTC) tracking problem is studied for the nonlinear multi-agent systems (MASs) in the nonstrict feedback form. The MASs are subject to unknown symmetric output dead zones, actuator bias and gain faults, and unknown control coefficients. According to the properties of the neural network (NN), the unstructured uncertainties problem is solved. The Nussbaum function is used to address the output dead zones and unknown control directions problems. By introducing an arbitrarily small positive number, the “singularity” problem caused by combining the finite-time control and backstepping design is solved. According to the backstepping design and Lyapunov stability theory, a finite-time adaptive NN FTC controller is obtained, which guarantees that the tracking error converges to a small neighborhood of zero in a finite time, and all signals in the closed-loop system are bounded. Finally, the effectiveness of the proposed method is illustrated via a physical example.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
A Survey of Reachability and Controllability for Positive Linear Systems. This paper is a survey of reachability and controllability results for discrete-time positive linear systems. It presents a variety of criteria in both algebraic and digraph forms for recognising these fundamental system properties with direct implications not only in dynamic optimization problems (such as those arising in inventory and production control, manpower planning, scheduling and other areas of operations research) but also in studying properties of reachable sets, in feedback control problems, and others. The paper highlights the intrinsic combinatorial structure of reachable/controllable positive linear systems and reveals the monomial components of such systems. The system matrix decomposition into monomial components is demonstrated by solving some illustrative examples.
GloMoSim: a library for parallel simulation of large-scale wireless networks Abstract A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justify analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
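To make the event-driven idea concrete, here is a minimal Python sketch of an idealized level-crossing sampler; the synthetic signal, the 8-bit level grid, and the single-bit up/down event format are illustrative assumptions, not the paper's power model:

import numpy as np

def level_crossing_sample(x, delta):
    """Emit (index, direction) events whenever x crosses a grid of spacing delta.
    Idealized LCADC behavior: no timer, one up/down bit per crossing event."""
    events = []
    level = np.round(x[0] / delta) * delta  # start tracking at the nearest level
    for i, v in enumerate(x[1:], start=1):
        while v >= level + delta:           # upward crossing(s)
            level += delta
            events.append((i, +1))
        while v <= level - delta:           # downward crossing(s)
            level -= delta
            events.append((i, -1))
    return events

# Illustrative comparison on a mostly flat signal with one sharp event,
# loosely mimicking the sparse activity of many biosignals.
t = np.linspace(0, 1, 10_000)
x = np.exp(-((t - 0.5) / 0.02) ** 2)
ev = level_crossing_sample(x, delta=1 / 2**8)   # ~8-bit level grid
print(len(ev), "events vs", x.size, "fixed-rate samples")

The event count tracks signal activity rather than elapsed time, which is the source of the data reduction the abstract describes for sparse biosignals.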
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
An Optimized Driver for SiC JFET-Based Switches Enabling Converter Operation With More Than 99% Efficiency This paper presents the single-channel galvanically isolated gate driver optimized for driving a normally-on silicon carbide junction field effect transistor (SiC JFET) also presented at ISSCC 2012 [1]. The idea of the chosen direct-drive JFET concept is to switch power with a normally-on SiC JFET, using the high voltage breakdown capability of the SiC JFET and ensuring safe normally-off behavior using a normally-off low-voltage MOSFET in series. By controlling the transistor gates individually, the JFET can be driven with minimum switching losses and good control. The driver makes the operation and handling of normally-on SiC JFETs as safe as normally-off switches, thereby simplifying the integration of normally-on SiC JFETs into systems like switch-mode power supplies. To transfer the signal from the controller to the driver over the required galvanic isolation of 1700 V, a two-chip solution with a coreless transformer arranged in one package was chosen, using a 0.6 μm BiCMOS and a 0.8 μm BCD technology. Powering of the gate driver through bootstrapping has been made possible by the direct-drive JFET concept and the further integration of supervision and control circuitry and a negative voltage regulator that regulates the entire p-substrate of the driver chip. Tested together with the JFET in a buck converter, efficiencies over 99% have been measured.
Stacked-Chip Implementation of On-Chip Buck Converter for Distributed Power Supply System in SiPs An on-chip buck converter which is implemented by stacking chips and suitable for on-chip distributed power supply systems is proposed. The operation of the converter with 3-D chip stacking is experimentally verified for the first time. The manufactured converter achieves a maximum power efficiency of 62% for an output current of 70 mA and a voltage conversion ratio of 0.7 with a switching frequen...
Redundant High-Density High-Efficiency Double-Conversion Uninterruptible Power System This paper presents a novel design for double-conversion uninterruptible power supply (UPS) systems that results in higher power density, higher efficiency, and more flexible parallelism for higher reliability. This design comprises a three-level circuit topology, a custom power semiconductor module, and a cross-current sensorless parallel control. The innovative circuit topology results in a UPS with higher efficiency and dramatic size/weight reduction and confirms the advantages of three-level circuits in low-voltage applications. A higher-reliability uninterruptible power system using parallel redundant UPS modules employs a novel cross-current sensorless control without compromising performance in comparison with single-module systems. Test results using single- and multiple-module systems confirm the improvement in efficiency and the high performance of this UPS. This novel design is expected to contribute to greening information technology facilities by realizing high reliability and high efficiency at the same time.
An 18 V Input 10 MHz Buck Converter With 125 ps Mixed-Signal Dead Time Control. A highly integrated synchronous buck converter with a predictive dead time control for input voltages >18 V with 10 MHz switching frequency is presented. A high-resolution dead time of ~125 ps makes it possible to reduce dead-time-dependent losses without requiring body diode conduction to evaluate the dead time. High resolution is achieved by frequency compensated sampling of the switching node and by an 8 ...
Fully Integrated Dual-Channel Gate Driver and Area Efficient PID Compensator for Surge Tolerant Power Sensor Interface This paper presents a power sensor interface (PSI) driving a wide range of load valves with coil resistance ranging from 5 Ω to 62.5 Ω. It can sustain voltage surges up to 115 V. An integrated high-voltage (HV) dual-channel gate driver with 8.7 ns deadtime (DT) is implemented to efficiently drive e-GaN FETs in a synchronous DC-DC buck converter. In addition, an area-optimization technique of on-chip passive components is proposed to enable full-integration of voltage-mode controller with 87% reduction in area. The achieved peak efficiency is 98.7% @ 4.67 A load. The feedback bandwidth is ~100 kHz to maintain fast transient response at 1 MHz switching frequency. The settling time is < 70 μs and < 100 μs at load changes of -5.2 A and 5.2 A respectively. Overshoot (OS) and undershoot (US) voltages are within 2% of nominal VOUT. The layout of the gate driver and feedback control is implemented in 0.35-μm HV CMOS process with active die area of 0.75 mm².
Algebraic Series-Parallel-Based Switched-Capacitor DC-DC Boost Converter With Wide Input Voltage Range and Enhanced Power Density. This article presents an algebraic series-parallel (ASP) topology for fully integrated switched-capacitor (SC) dc-dc boost converters with flexible fractional voltage conversion ratios (VCRs). By elaborating the output voltage (VOUT) expression into a specific algebraic form, the proposed ASP can achieve improvements on both the charge sharing and bottom-plate-parasitic losses while maintaining th...
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
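As an illustration of the CR idea, the sketch below builds a pure-sum chain of recurrences for a polynomial from forward differences and then walks the evaluation grid using only additions; this Python construction is an assumption standing in for the paper's algebraic CR-construction algorithm (implemented there in MAXIMA/Common Lisp):

def build_sum_cr(f, x0, h, k):
    """Coefficients of the chain of recurrences {c0, +, c1, +, ..., +, ck}
    for a degree-k polynomial f on the grid x0, x0+h, x0+2h, ...,
    obtained as forward differences of f at x0."""
    vals = [f(x0 + i * h) for i in range(k + 1)]
    cr = []
    for _ in range(k + 1):
        cr.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return cr

def cr_evaluate(cr, n):
    """Yield f(x0), f(x0+h), ..., f(x0+(n-1)h) using only additions."""
    c = list(cr)
    for _ in range(n):
        yield c[0]
        for j in range(len(c) - 1):
            c[j] += c[j + 1]

f = lambda x: 3 * x**3 - 2 * x + 1
print(list(cr_evaluate(build_sum_cr(f, 0.0, 0.5, 3), 5)))
# matches [f(0), f(0.5), f(1.0), f(1.5), f(2.0)] up to rounding

Each grid point costs k additions instead of a full polynomial evaluation, which is the speedup CRs exploit.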
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Side-Channel Leaks in Web Applications: A Reality Today, a Challenge Tomorrow With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
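A minimal numpy sketch of the component separation, written for the complex baseband equivalent of the bandpass signal: each component has constant envelope A/2 and carries the amplitude information as an outphasing angle. The function name and the test signal are illustrative:

import numpy as np

def linc_split(s, A=None):
    """Split complex baseband signal s into two constant-envelope components
    s1, s2 with |s1| = |s2| = A/2 and s1 + s2 = s (the LINC decomposition)."""
    a = np.abs(s)
    if A is None:
        A = a.max()                                  # peak envelope sets the component level
    phi = np.angle(s)
    theta = np.arccos(np.clip(a / A, 0.0, 1.0))      # outphasing angle
    s1 = (A / 2) * np.exp(1j * (phi + theta))
    s2 = (A / 2) * np.exp(1j * (phi - theta))
    return s1, s2

rng = np.random.default_rng(0)
s = rng.normal(size=256) + 1j * rng.normal(size=256)   # stand-in envelope-varying signal
s1, s2 = linc_split(s)
assert np.allclose(s1 + s2, s)                         # passive combining restores the signal
assert np.allclose(np.abs(s1), np.abs(s).max() / 2)    # constant envelope per component

Because |s1| and |s2| never vary, each branch can be amplified by a nonlinear (e.g., saturated) stage without distorting the recombined output.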
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resource limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present an algorithm with optimal running time O(D + Δ) that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.033333, 0, 0, 0, 0, 0, 0, 0, 0
A meter-range UWB transceiver chipset for around-the-head audio streaming Any around-the-body wireless system faces challenging requirements. This is especially true in the case of audio streaming around the head, e.g. for wireless audio headsets or hearing-aid devices. The behind-the-ear device typically serves multiple radio links, e.g. ear-to-ear, ear-to-pocket (a phone or MP3 player) or even a link between the ear and a remote base station such as a TV. Good audio quality is a prerequisite and mW-range power consumption is compulsory in view of battery size. However, the GHz communication channel typically shows a significant attenuation; for an ear-to-ear link, the attenuation due to the narrowband fade dominates and is in the order of 55 to 65 dB [1]. The typically small antennas, close to the human body, add another 10 to 15 dB of losses. For the ear-to-pocket and the ear-to-remote link, the losses due to body proximity and antenna size reduce, however the distance increases, resulting in a similar link budget requirement of 80 dB.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
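For orientation, the generic Lambertian line-of-sight channel model commonly used in VLC link budgets is shown below; note this is a textbook stand-in, since the paper replaces the Lambertian source with a market-weighted headlamp beam pattern:

\[
P_r = P_t\,\frac{(m+1)\,A_{\mathrm{PD}}}{2\pi d^2}\,\cos^m(\phi)\,\cos(\psi),
\qquad m = -\frac{\ln 2}{\ln\!\big(\cos\Phi_{1/2}\big)},
\]

where \(A_{\mathrm{PD}}\) is the photodetector area, \(d\) the transmitter-receiver distance, \(\phi\) and \(\psi\) the emission and incidence angles, and \(\Phi_{1/2}\) the source semi-angle at half power. The BER-versus-geometry maps in the paper arise from evaluating the received power (and hence SNR) over such angles and distances.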
A CMOS pressure sensor tag chip for passive wireless applications. This paper presents a novel monolithic pressure sensor tag for passive wireless applications. The proposed pressure sensor tag is based on an ultra-high frequency RFID system. The pressure sensor element is implemented in the 0.18 μm CMOS process and the membrane gap is formed by sacrificial layer release, resulting in a sensitivity of 1.2 fF/kPa within the range from 0 to 600 kPa. A three-stage rectifier adopts a chain of auxiliary floating rectifier cells to boost the gate voltage of the switching transistors, resulting in a power conversion efficiency of 53% at the low input power of -20 dBm. The capacitive sensor interface, using a phase-locked loop architecture, employs fully-digital blocks, which results in a 7.4-bit resolution and 0.8 μW power dissipation at 0.8 V supply voltage. The proposed passive wireless pressure sensor tag dissipates a total of 3.2 μW.
A Low Cost UHF RFID System with OCA tag for Short Range Communication A UHF RF identification system, consisting of a fully integrated tag and a special reader, has been developed for short range and harsh size requirement applications. The system is fabricated in the standard 0.18 μm CMOS process. The whole tag chip with antenna, analog front-end, baseband and memory takes up an area of 0.36 mm², which is smaller than other reported tags with on-chip antenna (OCA) using a standard CMOS process. A self-defined protocol is proposed to reduce the power consumption and minimize the size of the tag. The specialized SOC reader system, consisting of RF transceiver, digital baseband, MCU and host interface, supports both the self-defined and ISO 18000-6C protocols. Its power consumption is about 500 mW, which is much lower than other reported readers considering the transmission power. Measurement results show that the system's reading range is 2 mm with 20 dBm reader output power. It has also been verified that the data stored in an OCA tag embedded in a pearl can be successfully read out. With an inductive antenna printed on a paper substrate around the OCA tag, the reading range can be extended from several centimeters to meters depending on the shape and size of the inductive antenna.
An Ultralow-Power Wake-Up Receiver Based on Direct Active RF Detection. An ultralow-power direct active RF detection wake-up receiver (WuRx) is presented. In order to reduce the power consumption and system complexity, a differential RF envelope detector is implemented in a complementary current-reuse architecture. The detector sensitivity is enhanced through an embedded matching network with signal passive amplification. A prototype receiver is fabricated in 0.18-μm ...
0.56 V, –20 dBm RF-Powered, Multi-Node Wireless Body Area Network System-on-a-Chip With Harvesting-Efficiency Tracking Loop A battery-less, multi-node wireless body area network (WBAN) system-on-a-chip (SoC) is demonstrated. An efficiency tracking loop is proposed that adjusts the rectifier's threshold voltage to maximize the wireless harvesting operation, resulting in a minimum RF sensitivity better than -20 dBm at 904.5 MHz. Each SoC node is injection-locked and time-synchronized with the broadcasted RF basestation power (up to a sensitivity of -33 dBm) using an injection-locked frequency divider (ILFD). Hence, every sensor node is phase-locked with the basestation and all nodes can wirelessly transmit TDMA sensor data concurrently. Designed in a 65 nm-CMOS process, the fabricated sensor SoC contains the energy harvesting rectifier and bandgap, duty-cycled ADC, digital logic, as well as the multi-node wireless clock synchronization and MICS-band transmitter. For a broadcasted basestation power of 20 dBm (30 dBm), experimental measurements verify correct powering, sensor reading, and wireless data transfer for a distance of 3 m (9 m). The entire biomedical system application is verified by reception of room and abdominal temperature monitoring.
High-Efficiency Differential-Drive CMOS Rectifier for UHF RFIDs A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small ...
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM-SIAM Symposium on Discrete Algorithms," pp. 372-381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. We also derive the asymptotic growth of our algorithm's performance with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed before.
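For contrast with the randomized result, here is a sketch of the classic deterministic doubling strategy for the two-path case (w = 2), whose worst-case cost of roughly 9 times the goal distance is the deterministic optimum that the paper's randomized algorithm nearly halves in expectation; the simulation harness is illustrative:

def doubling_search(goal):
    """Deterministic w=2 cow-path strategy: alternate directions, doubling
    the excursion length each turn. Returns total distance walked until the
    goal (at signed position `goal`, |goal| >= 1) is reached. Worst case is
    9*|goal| - O(1), the optimal deterministic competitive ratio."""
    walked, excursion, direction = 0, 1, +1
    while True:
        # Walk out to direction * excursion, checking for the goal on the way.
        if direction * goal > 0 and abs(goal) <= excursion:
            return walked + abs(goal)
        walked += 2 * excursion        # out and back to the origin
        excursion *= 2
        direction = -direction

print(doubling_search(+100))   # goal 100 steps along the positive path
print(doubling_search(-7))     # goal 7 steps along the negative path

The randomized algorithm improves on this by randomizing the initial excursion length and direction, so no adversarial goal placement can force the worst case.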
Pinning adaptive synchronization of a general complex dynamical network There are two challenging fundamental questions in pinning control of complex networks: (i) How many nodes should a network with fixed network structure and coupling strength be pinned to reach network synchronization? (ii) How much coupling strength should a network with fixed network structure and pinning nodes be applied to realize network synchronization? To address these two questions, we propose a general complex dynamical network model and then further investigate its pinning adaptive synchronization. Based on this model, we obtain several novel adaptive synchronization criteria that give positive answers to these two questions. That is, we provide a simple approximate formula for estimating the number of pinning nodes and the magnitude of the coupling strength for a given general complex dynamical network. Here, the coupling-configuration matrix and the inner-coupling matrix are not necessarily symmetric. Moreover, our pinning adaptive controllers are rather simple compared with some traditional controllers. A Barabási–Albert network example is finally given to show the effectiveness of the proposed synchronization criteria.
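The general template that pinning adaptive schemes of this kind use can be sketched as follows; the exact model and adaptive laws in the paper may differ, so treat this as the standard form rather than the paper's precise criteria:

\[
\dot{x}_i = f(x_i) + c\sum_{j=1}^{N} a_{ij}\,\Gamma x_j + u_i, \qquad i = 1,\dots,N,
\]

with, for the pinned nodes,

\[
u_i = -c\,d_i(t)\,\Gamma\,(x_i - s), \qquad
\dot{d}_i = k_i\,(x_i - s)^{\mathsf{T}}\Gamma\,(x_i - s),
\]

and \(u_i = 0\) for the unpinned nodes, where \(s(t)\) is the desired synchronization trajectory, \(\Gamma\) the inner-coupling matrix, \((a_{ij})\) the coupling-configuration matrix, and the adaptive gains \(d_i(t)\) grow until the pinned error dynamics dominate the network.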
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
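The simplest of those equivalent circuits is a single negative-conductance pole: during regeneration the differential voltage grows exponentially from its initial value, which immediately yields the decision time. This first-order form is a common textbook reduction, not the paper's complete time-varying analysis:

\[
v(t) = v_0\, e^{t/\tau}, \qquad \tau \approx \frac{C}{g_m}, \qquad
t_{\mathrm{decide}} = \tau\,\ln\frac{V_{\mathrm{logic}}}{|v_0|},
\]

so a small initial overdrive \(v_0\) (set by the input difference plus offset and noise) translates only logarithmically into decision delay, which is why metastability is rare but can never be eliminated entirely.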
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
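For readers unfamiliar with the algorithm, a minimal floating-point numpy sketch of ADMM for the LASSO form of compressive-sensing recovery (min ½||Ax − b||² + λ||x||₁) is given below; the problem sizes and parameters are illustrative, and the chip's fixed-point, reconfigurable datapath is of course quite different:

import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1.
    The x-update system matrix is factored once and reused every iteration."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        v = x + u                                  # soft-thresholding (z-update)
        z = np.maximum(0, v - lam / rho) - np.maximum(0, -v - lam / rho)
        u = u + x - z                              # dual update
    return z

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8                               # sparse signal, compressive measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = admm_lasso(A, A @ x_true, lam=0.05)

The three updates (a linear solve, an elementwise soft threshold, and a vector add) map naturally onto a streaming hardware datapath, which is what makes ADMM attractive for this kind of reconstruction ASIC.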
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.02, 0, 0, 0, 0, 0, 0, 0
A generalized quadrature bandpass sampling in radio receivers Bandpass sampling (BPS) realizes frequency down-conversion by undersampling. Noise aliasing as the direct consequence of the lower sampling rate causes a performance degradation. In this paper, a generalized quadrature BPS (GQBPS) combined with a filter which performs both reconstruction and bandpass filtering is studied in the frequency domain with respect to both signal reconstruction and noise aliasing reduction. The theoretical analyses show that GQBPS might be a potential way to reduce noise aliasing at the cost of a more complicated reconstruction algorithm.
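The two basic textbook relations behind any bandpass-sampling front end, the aliased (down-converted) frequency and the valid sampling-rate bands, can be sketched as below; these are generic BPS facts, not the GQBPS reconstruction studied in the paper:

def alias_frequency(fc, fs):
    """Apparent (aliased) frequency of a tone at fc when sampled at fs."""
    f = fc % fs
    return min(f, fs - f)

def valid_bps_rates(fc, bw, n):
    """Classic no-overlap BPS constraint: 2*fH/n <= fs <= 2*fL/(n-1) for
    integer n >= 1, with fL = fc - bw/2 and fH = fc + bw/2."""
    fL, fH = fc - bw / 2, fc + bw / 2
    lo = 2 * fH / n
    hi = 2 * fL / (n - 1) if n > 1 else float("inf")
    return (lo, hi) if lo <= hi else None

print(alias_frequency(fc=2.44e9, fs=100e6))       # 2.44 GHz tone aliases to 40 MHz
print(valid_bps_rates(fc=2.44e9, bw=20e6, n=40))  # one admissible fs band

Noise aliasing arises because every fs-wide band folds onto the same IF, which is exactly the degradation GQBPS aims to suppress with its combined reconstruction/bandpass filtering.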
Generalized Bandpass Sampling with Complex FIR Filtering In this paper, generalized quadrature bandpass sampling (GQBPS) with complex FIR filtering is studied with respect to both noise and jitter. GQBPS is a type of nonuniform sampling, and has been extended to generalized uniform bandpass sampling (GUBPS). It is shown that GQBPS and GUBPS with complex FIR filtering perform both down-conversion and noise aliasing suppression in addition to sampling. Due to the averaging operation of FIR filtering, sampling jitter is also reduced to a certain degree by GQBPS and GUBPS. However, the performance of GQBPS is limited by the maximum effective sampling rate determined by the fixed time resolution of the sampling scheme. In contrast, GUBPS avoids this problem.
Filtering transformation in Generalized Quadrature Bandpass Sampling.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
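A rough Python analogue of threaded code is a program stored directly as a sequence of routine references, executed by a dispatch loop that performs one indirect call per word with no opcode decoding step; the mini-VM below is an illustrative sketch, since the paper's realizations are in hardware and machine language:

def lit(vm):  vm.stack.append(vm.code[vm.ip]); vm.ip += 1   # push inline literal
def add(vm):  b, a = vm.stack.pop(), vm.stack.pop(); vm.stack.append(a + b)
def emit(vm): print(vm.stack.pop())
def halt(vm): vm.ip = None

class VM:
    """Threaded-code interpreter: `code` is a flat list of routine references
    (with literals inlined); dispatch is one indirect call per word."""
    def __init__(self, code):
        self.code, self.ip, self.stack = code, 0, []
    def run(self):
        while self.ip is not None:
            op = self.code[self.ip]
            self.ip += 1
            op(self)

# "Threaded" program: push 2, push 3, add, print, stop.
VM([lit, 2, lit, 3, add, emit, halt]).run()

The key property, as in the paper, is that the program text is the sequence of routine addresses itself, so no separate interpreter switch over opcodes is needed.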
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating post-silicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core Systems-on-Chip (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables a 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Estimating and sampling graphs with multidimensional random walks Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.
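A compact sketch of the Frontier sampling step described above: m dependent walkers, with the next walker to move chosen with probability proportional to its current vertex degree; the adjacency representation, toy graph, and parameter values are illustrative:

import random

def frontier_sampling(adj, m=16, steps=1000, seed=0):
    """Frontier sampling with m dependent random walkers. Each step, choose a
    walker with probability proportional to its current degree, move it to a
    uniformly random neighbor, and record the traversed edge as a sample."""
    rng = random.Random(seed)
    frontier = rng.sample(list(adj), m)    # walkers start at random vertices
    sample = []
    for _ in range(steps):
        degs = [len(adj[v]) for v in frontier]
        i = rng.choices(range(m), weights=degs)[0]
        u = frontier[i]
        v = rng.choice(list(adj[u]))
        sample.append((u, v))
        frontier[i] = v
    return sample

# Toy undirected graph: a path 0-1-2-3 plus a chord.
adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
edges = frontier_sampling(adj, m=2, steps=50)

Because several walkers are active at once, the sampler is less likely to get trapped in a loosely connected component than a single random walk, which is the robustness property the abstract emphasizes.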
Backwards-compatible array bounds checking for C with very low overhead The problem of enforcing correct usage of array and pointer references in C and C++ programs remains unsolved. The approach proposed by Jones and Kelly (extended by Ruwase and Lam) is the only one we know of that does not require significant manual changes to programs, but it has extremely high overheads of 5x-6x and 11x-12x in the two versions. In this paper, we describe a collection of techniques that dramatically reduce the overhead of this approach, by exploiting a fine-grain partitioning of memory called Automatic Pool Allocation. Together, these techniques bring the average overhead of the checks down to only 12% for a set of benchmarks (but 69% for one case). We show that the memory partitioning is key to bringing down this overhead. We also show that our technique successfully detects all buffer overrun violations in a test suite modeling reported violations in some important real-world programs.
Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix -- novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors.
Design of ultra-wide-load, high-efficiency DC-DC buck converters The paper presents the design of a current-mode control DC-DC buck converter with pulse-width modulation (PWM) mode. The converter achieves over 90% efficiency for load currents ranging from 50 mA to 500 mA, with a maximum power efficiency of 95.6%; the circuit was simulated in the TSMC 0.35 μm CMOS process. In order to achieve high efficiency over an ultra-wide load range, this paper implements two PMOS transistors as switches. Results show that the converter achieves above 90% efficiency over the range from 30 mA to 1200 mA with a maximum efficiency of 96.36%. Results also show that, with the additional switch transistor, the current load range is more than doubled. With two PMOS transistors, the proposed converter can also support 3 different load ranges, so that it can be programmed for applications operating at those three different load ranges.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.0975, 0.05, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Finite-time stochastic synchronization of dynamic networks with nonlinear coupling strength via quantized intermittent control This paper considers the finite-time synchronization of dynamic networks with nonlinear coupling strength and stochastic perturbations by using intermittent control. Since it is difficult to estimate the settling-time due to the uncertain intermittent intervals, a novel lemma is established. Then, two quantized intermittent controllers without chattering are designed, which can reduce the control cost and save channel resources. T-S fuzzy method is utilized to deal with the nonlinear coupling strength. By using Lyapunov direct method, sufficient conditions formulated by linear matrix inequalities (LMIs) are derived to ensure that the considered system can realize finite-time synchronization. Moreover, the control gains are designed. Two numerical simulations are given to show the merits of theoretical analysis.
Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems. In this paper, finite-time stabilization is considered for a class of nonlinear systems dominated by a lower-triangular model with a time-varying gain. Based on the finite-time Lyapunov stability theorem and dynamic gain control design approach, state feedback finite-time stabilization controllers are proposed with gains being tuned online by two dynamic equations. Different from many existing finite-time control designs for lower-triangular nonlinear systems, the celebrated backstepping method is not utilized here. It is observed that our design procedure is much simpler, and the resulting control gains are in general not as high as those provided by the backstepping method. A simulation example is given to demonstrate the effectiveness of the proposed design procedure.
Robust stability of Hopfield delayed neural networks via an augmented L-K functional. This paper focuses on the issue of robust stability of artificial delayed neural networks. A free-matrix-based inequality strategy is developed by introducing a set of slack variables, which can be optimized via existing convex optimization algorithms. To capture a large portion of the dynamical behaviors of the system, uncertain parameters are considered. By constructing an augmented Lyapunov functional, sufficient conditions are derived to guarantee that the considered neural systems are completely stable. The conditions are presented in the form of linear matrix inequalities (LMIs). Finally, numerical cases are given to show the suitability of the results presented.
Finite-time stabilization for a class of nonlinear systems via optimal control. In general, finite-time stabilization techniques can always stabilize a system if control cost is not considered. Considering the fact that control cost is a very important factor in control area, we investigate finite-time stabilization problem for a class of nonlinear systems in this paper, where the control cost can also be reduced. We formulate this problem into an optimal control problem, where the control functions are optimized such that the system can be stabilized with minimum control cost. Then, the control parameterization enhancing transform and the control parameterization method are applied to solve this problem. Two numerical examples are illustrated to show the effectiveness of the proposed method.
A Unified Framework Design for Finite-Time and Fixed-Time Synchronization of Discontinuous Neural Networks. In this article, the problems of finite-time/fixed-time synchronization have been investigated for discontinuous neural networks in the unified framework. To achieve the finite-time/fixed-time synchronization, a novel unified integral sliding-mode manifold is introduced, and corresponding unified control strategies are provided; some criteria are established for selecting suitable parameters for s...
Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays. This paper deals with the problem of existence and uniform stability analysis of fractional-order complex-valued neural networks with constant time delays. Complex-valued recurrent neural networks are an extension of real-valued recurrent neural networks that includes complex-valued states, connection weights, or activation functions. This paper gives sufficient conditions for the existence and uniform stability of such networks. Three numerical simulations are delineated to substantiate the effectiveness of the theoretical results.
Finite-time synchronization of nonidentical BAM discontinuous fuzzy neural networks with delays and impulsive effects via non-chattering quantized control • Two new inequalities are developed to deal with the mismatched coefficients of the fuzzy part. • A simple but robust quantized state feedback controller is designed to overcome the effects of discontinuous activations, time delay, and nonidentical coefficients simultaneously. The designed control schemes do not utilize the sign function and can save channel resources. Moreover, novel non-chattering quantized adaptive controllers are also considered to reduce the control cost. • By utilizing the 1-norm analytical technique and comparison system method, the effect of impulses on the FTS is well coped with. • Without utilizing the finite-time stability theorem in [16], several FTS criteria are obtained. Moreover, the settling time is explicitly estimated. Results of this paper can easily be extended to FTS of other classical delayed impulsive NNs with or without nonidentical coefficients.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω---a fundamental failure detector at the core of many algorithms---in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption---the existence of some unknown correct process whose asynchronous links are lossy but fair---we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
Bandwidth-efficient management of DHT routing tables Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20-23, 25, 26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table's size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or Gm-C integrators while a delay line is used to perform voltage to time conversion and to sum time signals. A simple flip-flop generates the pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for a wide bandwidth error amplifier, pulse-width modulator (PWM) in analog controllers or high resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm², of which the controller occupies only 0.0375 mm². It operates over a wide range of switching frequencies (10-25 MHz) and regulates output to any desired voltage in the range of 0.6 V to 1.5 V with 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
A fundamental control performance limit for a class of positive nonlinear systems. A fundamental performance limit is derived for a class of positive nonlinear systems. The performance limit describes the achievable output response in the presence of a positive disturbance and subject to a sign constraint on the allowable input. An explicit optimal input is derived which minimises the maximum output response whilst ensuring that the minimum output response does not fall below a pre-specified lower bound. The result provides a fundamental performance standard against which all control policies, including closed loop schemes, can be compared. Implications of the result are examined in the context of blood glucose regulation for Type 1 Diabetes.
Network-based static output feedback tracking control for fuzzy-model-based nonlinear systems This paper is concerned with network-based static output feedback tracking control for a class of nonlinear systems that cannot be stabilized by a static output feedback controller without a time-delay, but can be stabilized by a delayed static output feedback controller. For such systems, network-induced delay is intentionally introduced in the feedback loop to produce a stable and satisfactory tracking control. The nonlinear network-based control system is represented by an asynchronous T-S fuzzy system with an interval time-varying sawtooth delay due to sample-and-hold behaviors and network-induced delays. A new discontinuous complete Lyapunov-Krasovskii functional, which makes use of the lower bound of network-induced delays, the sawtooth delay and its upper bound, is constructed to derive a delay-dependent criterion on H∞ tracking performance analysis. Since routine relaxation methods in traditional T-S fuzzy systems cannot be employed to reduce the conservatism of the stability criterion, a new relaxation method is proposed by using asynchronous constraints on fuzzy membership functions to introduce some free-weighting matrices. Based on the feasibility of the derived criterion, a particle swarm optimization algorithm is presented to search the minimum H∞ tracking performance and static output feedback gains. An illustrative example is provided to show the effectiveness of the proposed method.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
A simple graph theoretic characterization of reachability for positive linear systems In this paper we consider discrete-time linear positive systems, that is, systems defined by a pair (A,B) of non-negative matrices. We study the reachability of such systems, which in this case amounts to the freedom of steering the state within the positive orthant by using non-negative control sequences. This problem was solved recently [Canonical forms for positive discrete-time linear control systems, Linear Algebra Appl., 310 (2000) 49]. However, we derive here necessary and sufficient conditions for reachability in a simpler and more compact form. These conditions are expressed in terms of particular paths in the graph naturally associated with the system.
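For cross-checking the paper's graph conditions on small examples, here is a hedged sketch of the classical algebraic test (the monomial-column criterion, usually attributed to Coxson and Shapiro); the paper's own conditions are graph-theoretic and are not reproduced here.

```python
# A and B must be entrywise non-negative.  (A, B) is positively reachable
# iff the reachability matrix [B, AB, ..., A^(n-1)B] contains an n x n
# monomial submatrix, i.e. monomial columns covering every axis.
import numpy as np

def is_positively_reachable(A, B):
    n = A.shape[0]
    # Collect the columns of [B, AB, ..., A^(n-1)B].
    cols, M = [], B.astype(float)
    for _ in range(n):
        cols.extend(M.T)          # each row of M.T is one column of M
        M = A @ M
    # A column is "monomial" if it has exactly one positive entry.
    axes = {int(np.flatnonzero(c > 0)[0])
            for c in cols if np.count_nonzero(c > 0) == 1}
    return len(axes) == n         # every coordinate direction covered?
```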
Observer-based Fuzzy Adaptive Inverse Optimal Output Feedback Control for Uncertain Nonlinear Systems In this article, an observer-based fuzzy adaptive inverse optimal output feedback control problem is studied for a class of nonlinear systems in strict-feedback form. The considered nonlinear systems contain unknown nonlinear dynamics and their states are not measured directly. Fuzzy logic systems are applied to identify the unknown nonlinear dynamics and an auxiliary nonlinear system is construct...
Fuzzy Secure Control for Nonlinear N-D Parabolic PDE-ODE Coupled Systems Under Stochastic Deception Attacks This article focuses on the design of fuzzy secure control for a class of coupled systems, which are modeled by a nonlinear N-dimensional (N-D) parabolic partial differential equation (PDE) subsystem and an ordinary differential equation (ODE) subsystem. Under stochastic deception attacks, a fuzzy secure control scheme is designed, which is effective in tolerating the attacks and ensuring the desired performance for the considered systems. A new fuzzy-dependent Poincaré–Wirtinger's inequality (PWI) is proposed. Compared with the traditional Poincaré's inequality, the fuzzy-dependent PWI is more flexible and less conservative. Meanwhile, an augmented Lyapunov–Krasovskii functional (LKF) is newly constructed, which strengthens the correlations of the PDE subsystem and ODE subsystem. Then, on the grounds of the fuzzy-dependent PWI and the augmented LKF, new exponential stabilization criteria are established for the PDE-ODE coupled systems. Finally, a hypersonic rocket car example is presented to verify the effectiveness and reduced conservatism of the obtained results.
On Linear Copositive Lyapunov Functions and the Stability of Switched Positive Linear Systems We consider the problem of common linear copositive Lyapunov function existence for positive switched linear systems. In particular, we present a necessary and sufficient condition for the existence of such a function for switched systems with two constituent linear time-invariant (LTI) systems. A number of applications of this result are also given.
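The existence question reduces to a plain linear program, which makes it easy to test numerically. A minimal feasibility sketch, assuming V(x) = vᵀx; the bound v ≥ 1 and the margin eps are illustrative normalisations of the strict inequalities.

```python
# A common linear copositive Lyapunov function V(x) = v^T x exists iff
# some v > 0 satisfies A_i^T v < 0 elementwise for every constituent
# matrix A_i (so V decreases along each subsystem on the positive orthant).
import numpy as np
from scipy.optimize import linprog

def common_copositive_lyapunov(A_list, eps=1e-6):
    n = A_list[0].shape[0]
    A_ub = np.vstack([A.T for A in A_list])   # rows encode (A_i^T v)_j <= -eps
    b_ub = -eps * np.ones(A_ub.shape[0])
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(1.0, None)] * n, method="highs")
    return res.x if res.success else None     # None: no such function exists
```

For the two-constituent-system case treated in the paper, call it with [A1, A2].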
Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments This paper presents a method that allows two wheeled mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FCs) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot performing OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms on the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.
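As a rough illustration of the population-based tuning involved, here is a generic particle swarm optimisation sketch with assumed hyperparameters; the paper's AF-CACPSO additionally fuses continuous ant colony optimisation and adaptively switches between the two, which this sketch does not attempt.

```python
# Each particle is a candidate controller-parameter vector scored by a
# user-supplied cost function (e.g. a simulated navigation error).
import numpy as np

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate parameters
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity blends inertia, personal memory, and swarm memory.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()]
    return gbest
```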
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
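The first phase of the paradigm, interest flooding with gradient setup, can be caricatured in a few lines. The code below is a toy model with assumed data structures, not the protocol itself: the sink floods an interest and each node remembers the neighbor it first heard it from, forming a gradient that sensed data later follows back toward the sink.

```python
from collections import deque

def build_gradients(neighbors, sink):
    """neighbors: dict mapping node -> iterable of neighbor nodes.
    Returns dict mapping node -> next hop toward the sink."""
    gradient, frontier = {sink: None}, deque([sink])
    while frontier:
        u = frontier.popleft()
        for v in neighbors[u]:
            if v not in gradient:    # first arrival wins, as in a BFS flood
                gradient[v] = u      # data at v is forwarded toward u
                frontier.append(v)
    return gradient
```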
A general theory of phase noise in electrical oscillators A general model is introduced which is capable of making accurate, quantitative predictions about the phase noise of different types of electrical oscillators by acknowledging the true periodically time-varying nature of all oscillators. This new approach also elucidates several previously unknown design criteria for reducing close-in phase noise by identifying the mechanisms by which intrinsic de...
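For reference, the phase-noise expression usually quoted for this impulse-sensitivity-function (ISF) model in the white-noise (1/f²) region is given below. It is recalled from the standard presentation and should be checked against the paper itself: Γ_rms is the RMS value of the ISF, q_max the maximum charge displacement across the noisy node's capacitance, and i_n²/Δf the noise current spectral density.

```latex
\mathcal{L}(\Delta\omega) \;=\;
10\log_{10}\!\left(
  \frac{\Gamma_{\mathrm{rms}}^{2}}{q_{\max}^{2}}
  \cdot
  \frac{\overline{i_{n}^{2}}/\Delta f}{2\,\Delta\omega^{2}}
\right).
```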
A new class of asynchronous A/D converters based on time quantization This work is a contribution to a drastic change in standard signal processing chains. The main objective is to reduce the power consumption by one or two orders of magnitude. Integrated Smart Devices and Communicating Objects are the application domains targeted by this work. In this context, we present a new class of Analog-to-Digital Converters (ADCs), based on an irregular sampling of the analog signal and an asynchronous design. Because these converters are not conventional, a complete design methodology is presented. It determines their characteristics given the required effective number of bits and the analog signal properties. It is shown that our approach leads to a significant reduction in terms of hardware complexity and power consumption. A prototype has been designed for speech applications, using the STMicroelectronics 0.18-μm CMOS technology. Electrical simulations prove that the figure of merit is improved by more than one order of magnitude compared to synchronous Nyquist ADCs.
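The event-driven sampling idea is easy to state in code. A minimal sketch, assuming 2^n_bits uniformly spaced levels and ideal comparators: the converter emits an irregular stream of (crossing time, level index) pairs instead of periodic samples, so activity, and hence power, tracks the signal.

```python
def level_crossing_sample(signal, times, n_bits=4, vmin=-1.0, vmax=1.0):
    q = (vmax - vmin) / (2 ** n_bits)       # quantization step
    samples = []
    last = int((signal[0] - vmin) // q)     # starting level index
    for t, v in zip(times, signal):
        level = int((v - vmin) // q)
        if level != last:                   # a level boundary was crossed
            samples.append((t, level))      # event-driven, irregular sample
            last = level
    return samples
```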
Observability, Reconstructibility and State Observers of Boolean Control Networks The aim of this paper is to introduce and characterize observability and reconstructibility properties for Boolean networks and Boolean control networks, described according to the algebraic approach proposed by D. Cheng and co-authors in the series of papers [3], [6], [7] and in the recent monograph. A complete characterization of these properties, based both on the Boolean matrices involved in the network description and on the corresponding digraphs, is provided. Finally, the problem of state observer design for reconstructible BNs and BCNs is addressed, and two different solutions are proposed.
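Observability admits several non-equivalent notions in this setting; the sketch below brute-forces one of them, chosen for illustration rather than taken from the paper: every pair of distinct initial states must be separated by the outputs of some input sequence of bounded length. The update map f and output map h are supplied as ordinary functions; the search is exponential in everything and suitable for small networks only.

```python
from itertools import product

def separates(f, h, a, b, seq):
    """True if input sequence seq yields different outputs from states a, b."""
    for u in seq:
        if h(a) != h(b):
            return True
        a, b = f(a, u), f(b, u)
    return h(a) != h(b)

def observable(f, h, n_state_bits, n_input_bits, horizon):
    states = list(product((0, 1), repeat=n_state_bits))
    inputs = list(product((0, 1), repeat=n_input_bits))
    for i, x0 in enumerate(states):
        for y0 in states[i + 1:]:
            if not any(separates(f, h, x0, y0, seq)
                       for seq in product(inputs, repeat=horizon)):
                return False    # an indistinguishable pair of states exists
    return True
```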
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization through in-memory computing, combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to an associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, the GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
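The flavor of row-parallel, bit-serial computation can be illustrated without the actual architecture. The sketch below is an assumed simplification, not the GP-SIMD ISA: one single-bit ALU per memory row adds two B-bit words, one bit-plane per cycle, so latency scales with word length while throughput scales with the number of rows.

```python
import numpy as np

def bitserial_add(a_bits, b_bits):
    """a_bits, b_bits: uint8 arrays of shape (rows, B), LSB first,
    one word per memory row."""
    rows, width = a_bits.shape
    out = np.zeros((rows, width + 1), dtype=np.uint8)
    carry = np.zeros(rows, dtype=np.uint8)
    for k in range(width):                  # one "cycle" per bit-plane
        s = a_bits[:, k] + b_bits[:, k] + carry
        out[:, k], carry = s & 1, s >> 1    # full adder across all rows at once
    out[:, width] = carry
    return out
```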
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
score_0 ... score_13: 1.1056, 0.1056, 0.1, 0.1, 0.1, 0.1, 0.036667, 0.01, 0, 0, 0, 0, 0, 0