Dataset schema (one row per query; for string columns Min/Max are character lengths, for float columns they are value ranges):

Column       Type      Min   Max
Query Text   string    10    40.4k
Ranking 1    string    12    40.4k
Ranking 2    string    12    36.2k
Ranking 3    string    10    36.2k
Ranking 4    string    13    40.4k
Ranking 5    string    12    36.2k
Ranking 6    string    13    36.2k
Ranking 7    string    10    40.4k
Ranking 8    string    12    36.2k
Ranking 9    string    12    36.2k
Ranking 10   string    12    36.2k
Ranking 11   string    20    6.21k
Ranking 12   string    14    8.24k
Ranking 13   string    28    4.03k
score_0      float64   1     1.25
score_1      float64   0     0.25
score_2      float64   0     0.25
score_3      float64   0     0.25
score_4      float64   0     0.25
score_5      float64   0     0.25
score_6      float64   0     0.25
score_7      float64   0     0.24
score_8      float64   0     0.2
score_9      float64   0     0.03
score_10     float64   0     0
score_11     float64   0     0
score_12     float64   0     0
score_13     float64   0     0
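To make the layout concrete, here is a minimal loading sketch in Python. The file name `rankings.parquet` is hypothetical, and the alignment of the 14 score columns against the one query plus 13 ranking columns is an assumption to check against the dataset card, since there is one more score than there are ranking columns.

```python
import pandas as pd

# Hypothetical local export of the table described above.
df = pd.read_parquet("rankings.parquet")

ranking_cols = [f"Ranking {i}" for i in range(1, 14)]  # 13 string columns
score_cols = [f"score_{i}" for i in range(14)]         # 14 float64 columns

row = df.iloc[0]
print("Query:", row["Query Text"][:80])

# Assumption: score_0 grades the query's own gold match (it is always >= 1
# in this dump), so score_k for k >= 1 is paired here with "Ranking k".
for col, score_col in zip(ranking_cols, score_cols[1:]):
    print(f"{score_col} = {row[score_col]:.4f}  ->  {row[col][:60]}")
```

The sample rows below follow this layout: one Query line, thirteen ranked candidates, then the row's fourteen scores.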
Query: A 1.2A buck-boost LED driver with 13% efficiency improvement using error-averaged SenseFET-based current sensing.
An Integrated Speed- and Accuracy-Enhanced CMOS Current Sensor With Dynamically Biased Shunt Feedback for Current-Mode Buck Regulators This paper presents a new compact on-chip current-sensing circuit to enable current-mode buck regulators operating at a high switching frequency for reducing the inductor profile. A dynamically biased shunt feedback technique is developed in the proposed current sensor to push nondominant poles to higher frequencies, thereby improving the speed and stability of the current sensor under a wide range of load currents. A feedforward gain stage in the proposed current sensor also increases the dc loop-gain magnitude and thus enhances the accuracy of the current sensing. A current-mode buck regulator with the proposed current sensor has been implemented in a standard 0.35-μm CMOS process. Measurement results show that the proposed current sensor can achieve 95% sensing accuracy and <50-ns settling time. The buck converter can thus operate properly at the switching frequency of 2.5 MHz with the duty cycle down to 0.3. The output ripple voltage of the regulator is <43 mV with a 4.7-μF off-chip capacitor and a 2.2-μH off-chip inductor. The power efficiency of the buck regulator achieves above 80% over the load current ranging from 25 to 500 mA.
A 5-MHz 91% peak-power-efficiency buck regulator with auto-selectable peak- and valley-current control This paper presents a multi-MHz buck regulator for portable applications using an auto-selectable peak- and valley-current control (ASPVCC) scheme. The proposed ASPVCC scheme and the dynamically-biased shunt feedback in the current sensors relax the settling-time requirement of the current sensing and improve the sensing speed. The proposed converter can thus operate at high switching frequencies with a wide range of duty ratios for reducing the required inductance. Implemented in a 0.35-μm CMOS process, the proposed buck converter can operate at 5-MHz with a duty-ratio range of 0.6, use a small-value off-chip inductor of 1 μH, and achieve 91% peak power efficiency.
A Monolithic Buck Converter With Near-Optimum Reference Tracking Response Using Adaptive-Output-Feedback A monolithic output-ripple-based buck converter with adaptive output and ultra-fast reference tracking is presented. Fixed-switching-frequency V2-control is used in steady-state operation; while its speed limitation during reference tracking is eliminated by employing end-point prediction, a novel oscillator with clock-holding function, and the proposed adaptive-output-feedback (AOFB)-scheme. The ...
Circuit techniques for reducing the effects of op-amp imperfections: autozeroing, correlated double sampling, and chopper stabilization In linear IC's fabricated in a low-voltage CMOS technology, the reduction of the dynamic range due to the dc offset and low frequency noise of the amplifiers becomes increasingly significant. Also, the achievable amplifier gain is often quite low in such a technology, since cascoding may not be a practical circuit option due to the resulting reduction of the output signal swing. In this paper, som...
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
MicroGP—An Evolutionary Assembly Program Generator This paper describes μGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. μGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show μGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0 through score_13): 1.24, 0.027009, 0.018462, 0.005, 0.001091, 0, 0, 0, 0, 0, 0, 0, 0, 0
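The score vector above decays steeply after the first position, which is consistent with graded relevance judgments. Purely as an illustration (no metric is defined in the dump itself), a discounted cumulative gain over one row's scores can be computed like this:

```python
import math

def dcg(scores):
    # DCG = sum over 1-indexed ranks i of rel_i / log2(i + 1).
    return sum(s / math.log2(i + 1) for i, s in enumerate(scores, start=1))

# Row 1's fourteen scores, as listed above.
row1 = [1.24, 0.027009, 0.018462, 0.005, 0.001091] + [0.0] * 9
print(round(dcg(row1), 4))  # dominated by the score at rank 1
```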
Query: Dynamic spectrum allocation in composite reconfigurable wireless networks Future wireless systems are expected to be characterized by increasing convergence between networks and further development of reconfigurable radio systems. In parallel with this, demand for radio spectrum from these systems will increase, as users take advantage of high quality multimedia services. This article aims to investigate and review the possibilities for the dynamic allocation of spectrum to different radio networks operating in a composite reconfigurable wireless system. The article first looks into the current interest of regulators in this area, before describing some possible schemes to implement dynamic spectrum allocation and showing some example performance results. Following this, the technical requirements that a DSA system would have, in terms of reconfigurable system implementation, are discussed.
Computing Resource Management for SDR Platforms.
Dynamic spectrum access in open spectrum wireless networks One of the reasons for the limitation of bandwidth in current generation wireless networks is the spectrum policy of the Federal Communications Commission (FCC). But, with the spectrum policy reform, open spectrum wireless networks, and spectrum agile radios are set to drive next-generation wireless networks. In this paper, we investigate continuous-time Markov models for dynamic spectrum access in open spectrum wireless networks. Both queueing and no queueing cases are considered. Analytical results are derived based on the Markov models. A random access protocol is proposed that is shown to achieve airtime fairness. A distributed version of this protocol that uses only local information is also proposed based on the homo egualis anthropological model. Inequality aversion by the radio systems to achieve fairness is captured by this model. These protocols are then extended to spectrum agile radios. Extensive simulation results are presented to compare the performances of fixed versus agile radios.
The software radio concept Since early 1980 an exponential blowup of cellular mobile systems has been observed, which has produced, all over the world, the definition of a plethora of analog and digital standards. In 2000 the industrial competition between Asia, Europe, and America promises a very difficult path toward the definition of a unique standard for future mobile systems, although market analyses underline the trading benefits of a common worldwide standard. It is therefore in this field that the software radio concept is emerging as a potential pragmatic solution: a software implementation of the user terminal able to dynamically adapt to the radio environment in which it is, time by time, located. In fact, the term software radio stands for radio functionalities defined by software, meaning the possibility to define by software the typical functionality of a radio interface, usually implemented in TX and RX equipment by dedicated hardware. The presence of the software defining the radio interface necessarily implies the use of DSPs to replace dedicated hardware, to execute, in real time, the necessary software. In this article objectives, advantages, problem areas, and technological challenges of software radio are addressed. In particular, SW radio transceiver architecture, possible SW implementation, and its download are analyzed
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Understanding Availability This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say "availability" and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex time-varying nature of availability in today's peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services.
On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log(n/ε)) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
Gossiping and Broadcasting versus Computing Functions in Networks In the theory of dissemination of information in interconnection networks (gossiping and broadcasting) one assumes that a message consists of a set of distinguishable, atomic pieces of information, and that one communication pattern is used for solving a task. In this paper, a close connection is established between this theory and a situation in which functions are computed in synchronous networks without restrictions on the type of message used and with possibly different communication patterns for different inputs. The following restriction on the way processors communicate turns out to be essential: (*) "Predictable reception": At the beginning of a step a processor knows whether it is to receive a message across one of its links or not. We show that if (*) holds then computing an n-ary function with a "critical input" (e.g., the OR of n bits) and distributing the result to all processors on an n-processor network G takes exactly as long as performing gossiping in G. Further we study the complexity of broadcasting one bit in a synchronous network, assuming that in one step a processor can send only one message, but without assuming (*), and broadcasting one bit on parallel random-access machines (PRAMs) and distributed memory machines (DMMs) with the ARBITRARY access resolution rule.
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0 through score_13): 1.1, 0.066667, 0.02, 0.003636, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Query: Barrier certificates for nonlinear model validation Methods for model validation of continuous-time nonlinear systems with uncertain parameters are presented in this paper. The methods employ functions of state-parameter-time, termed barrier certificates, whose existence proves that a model and a feasible parameter set are inconsistent with some time-domain experimental data. A very large class of models can be treated within this framework; this includes differential-algebraic models, models with memoryless/dynamic uncertainties, and hybrid models. Construction of barrier certificates can be performed by convex optimization, utilizing recent results on the sum of squares decomposition of multivariate polynomials.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Convex Certificates for Model (In)validation of Switched Affine Systems With Unknown Switches Checking validity of a model is a crucial step in the process of system identification. This is especially true when dealing with switched affine systems since, in this case, the problem of system identification from noisy data is known to be generically NP-Hard and can only be solved in practice by using heuristics and relaxations. Therefore, before the identified models can be used for instance for controller design, they should be systematically validated against additional experimental data. In this paper we address the problem of model (in)validation for multi-input multi-output switched affine systems in output error form with unknown switches. As a first step, we prove that necessary and sufficient invalidation certificates can be obtained by solving a sequence of convex optimization problems. In principle, these problems involve increasingly large matrices. However, as we show in the paper by exploiting recent results from semialgebraic geometry, the proposed algorithm is guaranteed to stop after a finite number of steps that can be explicitly computed from the a priori information. In addition, this algorithm exploits the sparse structure of the underlying optimization problem to substantially reduce the computational burden. The effectiveness of the proposed method is illustrated using both academic examples and a non-trivial problem arising in computer vision: activity monitoring.
Formal Guarantees in Data-Driven Model Identification and Control Synthesis. For many performance-critical control systems, an accurate (simple) model is not available in practice. Thus, designing controllers with formal performance guarantees is challenging. In this paper, we develop a framework to use input-output data from an unknown system to synthesize controllers from signal temporal logic (STL) specifications. First, by imposing mild assumptions on system continuity, we find a set-valued piecewise affine (PWA) model that contains all the possible behaviors of the concrete system. Next, we introduce a novel method for STL control of PWA systems with additive disturbances. By taking advantage of STL quantitative semantics, we provide lower-bound certificates on the degree of STL satisfaction of the closed-loop concrete system. Illustrative examples are presented.
Synthesis for Constrained Nonlinear Systems Using Hybridization and Robust Controllers on Simplices In this technical note, we propose an approach to controller synthesis for a class of constrained nonlinear systems. It is based on the use of a hybridization, that is a hybrid abstraction of the nonlinear dynamics. This abstraction is defined on a triangulation of the state-space where on each simplex of the triangulation, the nonlinear dynamics is conservatively approximated by an affine system subject to disturbances. Except for the disturbances, this hybridization can be seen as a piecewise affine hybrid system on simplices for which appealing control synthesis techniques have been developed in the past decade. We extend these techniques to handle systems subject to disturbances by synthesizing and coordinating local robust affine controllers defined on the simplices of the triangulation. We show that the resulting hybrid controller can be used to control successfully the original constrained nonlinear system. Our approach, though conservative, can be fully automated and is computationally tractable. To show its effectiveness in practical applications, we apply our method to control a pendulum mounted on a cart.
An effective method to interval observer design for time-varying systems. An interval observer for Linear Time-Varying (LTV) systems is proposed in this paper. Usually, the design of such observers is based on monotone systems theory. Monotone properties are hard to satisfy in many situations. To overcome this issue, in a recent work, it has been shown that under some restrictive conditions, the cooperativity of an LTV system can be ensured by a static linear transformation of coordinates. However, a constructive method for the construction of the transformation matrix and the observer gain, making the observation error dynamics positive and stable, is still missing and remains an open problem. In this paper, a constructive approach to obtain a time-varying change of coordinates, ensuring the cooperativity of the observer error in the new coordinates, is provided. The efficiency of the proposed approach is shown through computer simulations.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay system, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Approximate counting, uniform generation and rapidly mixing Markov chains The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ ℝ or for no β ∈ ℝ. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
An area-efficient multistage 3.0- to 8.5-GHz CMOS UWB LNA using tunable active inductors An area-efficient multistage 3.0- to 8.5-GHz ultra-wideband low-noise amplifier (LNA) utilizing tunable active inductors (AIs) is presented. The AI includes a negative impedance circuit (NIC) consisting of a pair of cross-coupled NMOS transistors and is tuned to vary the gain and bandwidth (BW) of the amplifier. Fabricated in a 90-nm digital CMOS process, the proposed fully on-chip LNA occupies a core chip area of only 0.022 mm2. The measurement results show a power gain S21 of 16.0 dB, a noise figure of 3.1-4.4 dB, and an input return loss S11 of less than -10.5 dB over the 3-dB BW of 3.0-8.5 GHz. Tuning the AIs allows one to increase the gain above 18.0 dB and to extend the BW over 9.4 GHz. The LNA consumes 16.0 mW from a power supply of 1.2 V.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Highly sensitive Hall magnetic sensor microsystem in CMOS technology A highly sensitive magnetic sensor microsystem based on a Hall device is presented. This microsystem consists of a Hall device improved by an integrated magnetic concentrator and new circuit architecture for the signal processing. It provides an amplification of the sensor signal with a resolution better than 30 μV and a periodic offset cancellation while the output of the microsystem is av...
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
Scores (score_0 through score_13): 1.11, 0.1, 0.1, 0.1, 0.06, 0.02, 0, 0, 0, 0, 0, 0, 0, 0
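Rows like these flatten naturally into (query, candidate, relevance) triples, the usual input format for training or evaluating a reranker. A hedged sketch follows; pairing score_k with Ranking k and skipping score_0 (treated here as a self- or gold-match score) are assumptions about the schema, not documented facts.

```python
from typing import Iterator, Tuple

def to_triples(row: dict) -> Iterator[Tuple[str, str, float]]:
    """Yield (query, candidate, relevance) triples from one flattened row."""
    query = row["Query Text"]
    for k in range(1, 14):  # Ranking 1 .. Ranking 13
        candidate = row.get(f"Ranking {k}")
        score = row.get(f"score_{k}")  # assumed to grade "Ranking {k}"
        if candidate is not None and score is not None:
            yield query, candidate, float(score)
```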
Query: Efficient Data Supply for Parallel Heterogeneous Architectures Decoupling techniques have been proposed to reduce the amount of memory latency exposed to high-performance accelerators as they fetch data. Although decoupled access-execute (DAE) and more recent decoupled data supply approaches offer promising single-threaded performance improvements, little work has considered how to extend them into parallel scenarios. This article explores the opportunities and challenges of designing parallel, high-performance, resource-efficient decoupled data supply systems. We propose Mercury, a parallel decoupled data supply system that utilizes thread-level parallelism for high-throughput data supply with good portability attributes. Additionally, we introduce some microarchitectural improvements for data supply units to efficiently handle long-latency indirect loads.
Architecture Aware Partitioning Algorithms Existing partitioning algorithms provide limited support for load balancing simulations that are performed on heterogeneous parallel computing platforms. On such architectures, effective load balancing can only be achieved if the graph is distributed so that it properly takes into account the available resources (CPU speed, network bandwidth). With heterogeneous technologies becoming more popular, the need for suitable graph partitioning algorithms is critical. We developed such algorithms that can address the partitioning requirements of scientific computations, and can correctly model the architectural characteristics of emerging hardware platforms.
AMD Fusion APU: Llano The Llano variant of the AMD Fusion accelerated processor unit (APU) deploys AMD Turbo CORE technology to maximize processor performance within the system's thermal design limits. Low-power design and performance/watt ratio optimization were key design approaches, and power gating is implemented pervasively across the APU.
Decoupling Data Supply from Computation for Latency-Tolerant Communication in Heterogeneous Architectures. In today's computers, heterogeneous processing is used to meet performance targets at manageable power. In adopting increased compute specialization, however, the relative amount of time spent on communication increases. System and software optimizations for communication often come at the costs of increased complexity and reduced portability. The Decoupled Supply-Compute (DeSC) approach offers a way to attack communication latency bottlenecks automatically, while maintaining good portability and low complexity. Our work expands prior Decoupled Access Execute techniques with hardware/software specialization. For a range of workloads, DeSC offers roughly 2× speedup, and additional specialized compression optimizations reduce traffic between decoupled units by 40%.
Tiny but mighty: designing and realizing scalable latency tolerance for manycore SoCs Modern computing systems employ significant heterogeneity and specialization to meet performance targets at manageable power. However, memory latency bottlenecks remain problematic, particularly for sparse neural network and graph analytic applications where indirect memory accesses (IMAs) challenge the memory hierarchy. Decades of prior art have proposed hardware and software mechanisms to mitigate IMA latency, but they fail to analyze real-chip considerations, especially when used in SoCs and manycores. In this paper, we revisit many of these techniques while taking into account manycore integration and verification. We present the first system implementation of latency tolerance hardware that provides significant speedups without requiring any memory hierarchy or processor tile modifications. This is achieved through a Memory Access Parallel-Load Engine (MAPLE), integrated through the Network-on-Chip (NoC) in a scalable manner. Our hardware-software co-design allows programs to perform long-latency memory accesses asynchronously from the core, avoiding pipeline stalls, and enabling greater memory parallelism (MLP). In April 2021 we taped out a manycore chip that includes tens of MAPLE instances for efficient data supply. MAPLE demonstrates a full RTL implementation of out-of-core latency-mitigation hardware, with virtual memory support and automated compilation targeting it. This paper evaluates MAPLE integrated with a dual-core FPGA prototype running applications with full SMP Linux, and demonstrates geomean speedups of 2.35× and 2.27× over software-based prefetching and decoupling, respectively. Compared to state-of-the-art hardware, it provides geomean speedups of 1.82× and 1.72× over prefetching and decoupling techniques.
CHIPKIT: An agile, reusable open-source framework for rapid test chip development The current trend for domain-specific architectures has led to renewed interest in research test chips to demonstrate new specialized hardware. Tapeouts also offer huge pedagogical value garnered from real hands-on exposure to the whole system stack. However, success with tapeouts requires hard-earned experience, and the design process is time consuming and fraught with challenges. Therefore, custom chips have remained the preserve of a small number of research groups, typically focused on circuit design research. This article describes the CHIPKIT framework: a reusable SoC subsystem which provides basic IO, an on-chip programmable host, off-chip hosting, memory, and peripherals. This subsystem can be readily extended with new IP blocks to generate custom test chips. Central to CHIPKIT is an agile RTL development flow, including a code generation tool called VGEN. Finally, we discuss best practices for full-chip validation across the entire design cycle.
Exploring the potential of heterogeneous von neumann/dataflow execution models General purpose processors (GPPs), from small inorder designs to many-issue out-of-order, incur large power overheads which must be addressed for future technology generations. Major sources of overhead include structures which dynamically extract the data-dependence graph or maintain precise state. Considering irregular workloads, current specialization approaches either heavily curtail performance, or provide simply too little benefit. Interestingly, well known explicit-dataflow architectures eliminate these overheads by directly executing the data-dependence graph and eschewing instruction-precise recoverability. However, even after decades of research, dataflow architectures have yet to come into prominence as a solution. We attribute this to a lack of effective control speculation and the latency overhead of explicit communication, which is crippling for certain codes. This paper makes the observation that if both out-of-order and explicit-dataflow were available in one processor, many types of GPP cores can benefit from dynamically switching during certain phases of an application's lifetime. Analysis reveals that an ideal explicit-dataflow engine could be profitable for more than half of instructions, providing significant performance and energy improvements. The challenge is to achieve these benefits without introducing excess hardware complexity. To this end, we propose the Specialization Engine for Explicit-Dataflow (SEED). Integrated with an inorder core, we see 1.67× performance and 1.65× energy benefits, with an Out-Of-Order (OOO) dual-issue core we see 1.33× and 1.70×, and with a quad-issue OOO, 1.14× and 1.54×.
DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing The DySER (Dynamically Specializing Execution Resources) architecture supports both functionality specialization and parallelism specialization. By dynamically specializing frequently executing regions and applying parallelism mechanisms, DySER provides efficient functionality and parallelism specialization. It outperforms an out-of-order CPU, Streaming SIMD Extensions (SSE) acceleration, and GPU acceleration while consuming less energy. The full-system field-programmable gate array (FPGA) prototype of DySER integrated into OpenSparc demonstrates a practical implementation.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
An ultra-wideband CMOS low noise amplifier for 3-5-GHz UWB system An ultra-wideband (UWB) CMOS low noise amplifier (LNA) topology that combines a narrowband LNA with a resistive shunt-feedback is proposed. The resistive shunt-feedback provides wideband input matching with small noise figure (NF) degradation by reducing the Q-factor of the narrowband LNA input and flattens the passband gain. The proposed UWB amplifier is implemented in 0.18-μm CMOS technol...
Replica compensated linear regulators for supply-regulated phase-locked loops Supply-regulated phase-locked loops rely upon the VCO voltage regulator to maintain a low sensitivity to supply noise and hence low overall jitter. By analyzing regulator supply rejection, we show that in order to simultaneously meet the bandwidth and low dropout requirements, previous regulator implementations used in supply-regulated PLLs suffer from unfavorable tradeoffs between power supply rejection and power consumption. We therefore propose a compensation technique that places the regulator's amplifier in a local replica feedback loop, stabilizing the regulator by increasing the amplifier bandwidth while lowering its gain. Even though the forward gain of the amplifier is reduced, supply noise affects the replica output in addition to the actual output, and therefore the amplifier's gain to reject supply noise is effectively restored. Analysis shows that for reasonable mismatch between the replica and actual loads, regulator performance is uncompromised, and experimental results from a 90 nm SOI test chip confirm that with the same power consumption, the proposed regulator achieves at least 4 dB higher supply rejection than the previous regulator design. Furthermore, simulations show that if not for other supply rejection-limiting components in the PLL, the supply rejection improvement of the proposed regulator is greater than 15 dB.
All-Digital Background Calibration Technique for Time-Interleaved ADC Using Pseudo Aliasing Signal A new digital background calibration technique for gain mismatches and sample-time mismatches in a Time-Interleaved Analog-to-Digital Converter (TI-ADC) is presented to reduce the circuit area. In the proposed technique, the gain mismatches and the sample-time mismatches are calibrated by using pseudo aliasing signals instead of using a bank of adaptive FIR filters which is conventionally utilized. The pseudo aliasing signals are generated and subtracted from an ADC output. A pseudo aliasing generator consists of the Hadamard transform and a fixed FIR filter. In case of a two-channel 10-bit TI-ADC, the proposed technique reduces the requirement for a word length of the FIR filter by about 50% without a look-up table (LUT) compared with the conventional technique. In addition, the proposed technique requires only one FIR filter compared with the bank of adaptive filters which requires (M-1) FIR filters in an M-channel TI-ADC.
A 1.95 GHz Fully Integrated Envelope Elimination and Restoration CMOS Power Amplifier Using Timing Alignment Technique for WCDMA and LTE A fully integrated envelope elimination and restoration (EER) CMOS power amplifier (PA) has been developed for WCDMA and LTE handsets. EER is a supply modulation technique that first divides modulated RF signal into envelope and phase signals and then restores it at a switching PA output. Supply voltage of the switching PA is modulated by the envelope signal through a high-speed supply modulator. EER PA is highly efficient due to the switching PA and the supply modulation. However, it generally has difficulty, especially for a wide bandwidth baseband application like LTE, achieving a wide bandwidth for phase signal path and highly accurate timing between envelope and phase signals. To overcome these challenges, an envelope/phase generator based on a mixer and a limiter was proposed to generate the wide bandwidth phase signal, and a timing aligner based on a delay locked loop with a variable high-pass filter (HPF) was proposed to compensate for the timing mismatch. The chip was implemented in 90 nm CMOS technology. Measured power-added efficiency (PAE) and adjacent channel leakage ratio (ACLR) were 39% and -41 dBc for WCDMA, and measured PAE and ACLR E-UTRA1 were 32% and -33 dBc for 20 MHz-BW LTE.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
Scores (score_0 through score_13): 1.11, 0.1, 0.1, 0.1, 0.1, 0.06, 0.0208, 0.0015, 0, 0, 0, 0, 0, 0
Query: Reliable Next-Generation Cortical Interfaces for Chronic Brain-Machine Interfaces and Neuroscience. This review focuses on recent directions stemming from work by the authors and collaborators in the emerging field of neurotechnology. Neurotechnology has the potential to provide a greater understanding of the structure and function of the complex neural circuits in the brain, as well as impacting the field of brain-machine interfaces (BMI). We envision ultralow-power wireless neural interface sy...
Advanced Biophysical Model to Capture Channel Variability for EQS Capacitive HBC Human Body Communication (HBC) has come up as a promising alternative to traditional radio frequency (RF) Wireless Body Area Network (WBAN) technologies. This is essentially due to HBC providing a broadband communication channel with enhanced signal security in the physical layer due to lower radiation from the human body as compared to its RF counterparts. An in-depth understandi...
Body-Area Powering With Human Body-Coupled Power Transmission and Energy Harvesting ICs This paper presents the body-coupled power transmission and ambient energy harvesting ICs. The ICs utilize human body-coupling to deliver power to the entire body, and at the same time, harvest energy from ambient EM waves coupled through the body. The ICs improve the recovered power level by adapting to the varying skin-electrode interface parasitic impedance at both the TX and RX. To maximize the power output from the TX, the dynamic impedance matching is performed amidst environment-induced variations. At the RX, the Detuned Impedance Booster (DIB) and the Bulk Adaptation Rectifier (BAR) are proposed to improve the power recovery and extend the power coverage further. In order to ensure the maximum power extraction despite the loading variations, the Dual-Mode Buck-Boost Converter (DM-BBC) is proposed. The ICs fabricated in 40 nm 1P8M CMOS recover up to 100 μW from the body-coupled power transmission and 2.5 μW from the ambient body-coupled energy harvesting. The ICs achieve the full-body area power delivery, with the power harvested from the ambiance via the body-coupling mechanism independent of placements on the body. Both approaches show power sustainability for wearable electronics all around the human body.
A 6.5-μW 10-kHz BW 80.4-dB SNDR Gm-C-Based CT ΔΣ Modulator With a Feedback-Assisted Gm Linearization for Artifact-Tolerant Neural Recording This article presents a Gm-C-based continuous-time delta-sigma modulator (CTDSM) for artifact-tolerant neural recording interfaces. We propose the feedback-assisted Gm linearization technique, which is applied to the first Gm-C integrator by using a resistive feedback digital-to-analog converter (DAC) in parallel to the degeneration resistor of the input Gm. This enables the input Gm to process the quantization noise, thereby improving the input range and linearity of the Gm-C-based CTDSM significantly. An energy-efficient second-order loop filter is realized by using a voltage-controlled oscillator (VCO) as the second integrator and a phase quantizer. A proportional-integral (PI) transfer function is employed at the first integrator, which minimizes the output swing while maintaining loop stability. Fabricated in a 110-nm CMOS process, the prototype CTDSM achieves a high input impedance, 300-mVpp linear input range, 80.4-dB signal-to-noise and distortion ratio (SNDR), 81-dB dynamic range (DR), and 76-dB common-mode rejection ratio (CMRR) and consumes only 6.5 μW with a signal bandwidth of 10 kHz. This corresponds to a figure of merit (FoM) of 172.3 dB, which is the state of the art among the neural recording ADCs. This work is also validated through the in vivo experiment.
A 0.025-mm² 0.8-V 78.5-dB SNDR VCO-Based Sensor Readout Circuit in a Hybrid PLL-ΔΣM Structure This article presents a capacitively coupled voltage-controlled oscillator (VCO)-based sensor readout featuring a hybrid phase-locked loop (PLL)-ΔΣ modulator structure. It leverages phase-locking and a phase-frequency detector (PFD) array to concurrently perform quantization and dynamic element matching (DEM), much reducing hardware/power compared with the existing VCO-based readouts' counting scheme. A low-cost in-cell data-weighted averaging (DWA) scheme is presented to enable a highly linear tri-level digital-to-analog converter (DAC). Fabricated in 40-nm CMOS, the prototype readout achieves 78-dB SNDR in 10-kHz bandwidth, consuming 4.68 μW and 0.025-mm² active area. With a 172-dB Schreier figure of merit, its efficiency advances the state-of-the-art VCO-based readouts by 50×.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
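To make the key-to-node mapping concrete, here is a minimal Python sketch of the consistent-hashing lookup Chord provides; the 16-bit identifier ring, the node names, and the centralized successor search (instead of Chord's O(log N) finger-table routing) are simplifications for illustration only.

    import hashlib
    from bisect import bisect_left

    M = 16                       # identifier bits (Chord itself uses 160-bit SHA-1 ids)
    RING = 1 << M

    def chord_id(name: str) -> int:
        # Hash a key or node name onto the identifier ring.
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

    class Ring:
        def __init__(self, node_names):
            # Sorted node identifiers stand in for the distributed ring state.
            self.ids = sorted(chord_id(n) for n in node_names)

        def successor(self, key: str) -> int:
            # A key is assigned to the first node whose id follows hash(key)
            # on the ring (with wrap-around); Chord locates this node in
            # O(log N) hops, here we simply look it up centrally.
            i = bisect_left(self.ids, chord_id(key))
            return self.ids[i % len(self.ids)]

    ring = Ring(["node-a", "node-b", "node-c", "node-d"])
    print(ring.successor("some-data-item"))   # id of the responsible node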
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
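As a rough illustration of the randomized-placement idea (not DieHard's actual allocator, which operates on real heap memory), the following Python toy keeps the heap at least twice as large as the number of live objects and places each allocation in a uniformly random free slot, so a small overflow from one object is unlikely to corrupt another:

    import random

    class RandomizedHeap:
        # Toy model of DieHard-style allocation: >= 2x slots per live object,
        # uniformly random placement among the free slots.
        def __init__(self, max_live_objects: int):
            self.slots = [None] * (2 * max_live_objects)

        def alloc(self, obj) -> int:
            free = [i for i, s in enumerate(self.slots) if s is None]
            i = random.choice(free)          # randomized placement
            self.slots[i] = obj
            return i

        def free(self, i: int):
            self.slots[i] = None

    heap = RandomizedHeap(max_live_objects=4)
    a = heap.alloc("A")
    b = heap.alloc("B")
    # An off-by-one write past slot `a` lands in slot a+1, which is likely
    # to be empty because at most half of the slots are ever live.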
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked
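For a concrete instance of the most classical entry in such a survey, here is a small numpy simulation of a discrete-time Luenberger observer; the plant matrices and the observer gain L (chosen so that the eigenvalues of A - LC are 0.8 and 0.6, inside the unit circle) are illustrative, not drawn from the paper.

    import numpy as np

    # Plant: x[k+1] = A x[k] + B u[k], measured output y[k] = C x[k].
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.6], [0.8]])      # observer gain: eig(A - L C) = 0.8, 0.6

    x = np.array([[1.0], [0.0]])      # true state, unknown to the observer
    xh = np.zeros((2, 1))             # observer's estimate

    for k in range(50):
        u = np.array([[0.1]])
        y = C @ x
        # Observer = copy of the plant driven by the output estimation error.
        xh = A @ xh + B @ u + L @ (y - C @ xh)
        x = A @ x + B @ u

    print(np.round((x - xh).ravel(), 6))   # estimation error decays to ~0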
Highly sensitive Hall magnetic sensor microsystem in CMOS technology A highly sensitive magnetic sensor microsystem based on a Hall device is presented. This microsystem consists of a Hall device improved by an integrated magnetic concentrator and new circuit architecture for the signal processing. It provides an amplification of the sensor signal with a resolution better than 30 μV and a periodic offset cancellation while the output of the microsystem is av...
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs showing a superior figure of merit (R_ds,ON × Q_g) in comparison with silicon FETs [1] can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully take advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit propagation delay, tdelay, of up to several 10s of ns in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10MHz regime. Moreover, the switching slew rate (SR) of driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while slow SR leads to long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving which incurs additional cost and complexity of the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high fsw operation in multiphase implementation are challenging.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.2
0.2
0.1
0.066667
0.033333
0
0
0
0
0
0
0
0
0
Leveraging resource management for efficient performance of Apache Spark Apache Spark is one of the most widely used open-source processing frameworks for big data; it allows processing large datasets in parallel using a large number of nodes. Often, applications of this framework use resource management systems like YARN, which provide jobs a specific amount of resources for their execution. In addition, a distributed file system such as HDFS stores the data that is to be analyzed by the framework. This design allows sharing cluster resources effectively by running jobs on a single-node or multi-node cluster infrastructure. Thus, one challenging issue is to realize effective resource management of these large cluster infrastructures in order to run distributed data analytics in an economically viable way. In this study, we use the Machine Learning library (MLlib) of Spark to implement different machine learning algorithms, then we manage the resources (CPU, memory, and disk) in order to assess the performance of Apache Spark. In this paper, we present a review of various works that focus on resource management and data processing in Big Data platforms. Furthermore, we perform a scalability analysis using Spark. We analyze the speedup and processing time. We deduce that beyond a certain number of nodes in the cluster, adding nodes no longer improves the speedup or the processing time. Then, we investigate the tuning of the resource allocation in Spark. We show that better performance is not obtained simply by allocating all the available resources; it depends on how the resource allocation is tuned. We propose new managed parameters and we show that they give better total processing time than the default parameters used by Spark. Finally, we study the persistence of Resilient Distributed Datasets (RDDs) in Spark using machine learning algorithms. We show that one storage level gives the best execution time among all tested storage levels.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
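The core of the approach is the heuristic extracted automatically from the STRIPS encoding; a compact Python sketch of the additive heuristic (atom costs computed to a fixed point under the precondition-independence assumption) is shown below. The action encoding (name, preconditions, add effects) with unit costs is a simplification for illustration.

    import math

    def h_add(state, goal, actions):
        # cost[p] = estimated cost of achieving atom p from `state`,
        # assuming action preconditions are achieved independently.
        cost = {p: 0.0 for p in state}
        changed = True
        while changed:                    # Bellman-style fixed point
            changed = False
            for _name, pre, add in actions:
                c = 1.0 + sum(cost.get(p, math.inf) for p in pre)
                for q in add:
                    if c < cost.get(q, math.inf):
                        cost[q] = c
                        changed = True
        return sum(cost.get(g, math.inf) for g in goal)

    acts = [("pick",  {"handempty", "clear_b"}, {"holding_b"}),
            ("stack", {"holding_b", "clear_c"}, {"on_b_c"})]
    print(h_add({"handempty", "clear_b", "clear_c"}, {"on_b_c"}, acts))  # 2.0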
Solving the find-path problem by good representation of free space Free space is represented as a union of (possibly overlapping) generalized cones. An algorithm is presented which efficiently finds good collision-free paths for convex polygonal bodies through space littered with obstacle polygons. The paths are good in the sense that the distance of closest approach to an obstacle over the path is usually far from minimal over the class of topologically equivalent collision-free paths. The algorithm is based on characterizing the volume swept by a body as it is translated and rotated as a generalized cone, and determining under what conditions one generalized cone is a subset of another.
Optimal Path Planning Generation for Mobile Robots using Parallel Evolutionary Artificial Potential Field In this paper, we introduce the concept of Parallel Evolutionary Artificial Potential Field (PEAPF) as a new method for path planning in mobile robot navigation. The main contribution of this proposal is that it makes controllability possible in complex real-world sceneries with dynamic obstacles if a reachable configuration set exists. The PEAPF outperforms the Evolutionary Artificial Potential Field (EAPF) proposal, which can also obtain optimal solutions but whose processing times might be prohibitive in complex real-world situations. Contrary to the original Artificial Potential Field (APF) method, which cannot guarantee controllability in dynamic environments, this proposal integrates the original APF, evolutionary computation and parallel computation, taking advantage of novel processor architectures, to obtain a flexible path-planning navigation method that retains the advantages of the APF and the EAPF while strongly reducing their disadvantages. We show comparative experiments of the PEAPF against the original APF and EAPF methods. The results demonstrate that this proposal outperforms both methods, making the PEAPF suitable for real-time applications.
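For reference, the classical APF step that the evolutionary and parallel variants build on can be sketched in a few lines of Python; the gains, influence radius, step cap, and scenario are illustrative, and the sketch also shows why plain APF needs help: gradient descent on the combined potential offers no guarantee against stalling in local minima.

    import numpy as np

    k_att, k_rep, rho0 = 1.0, 0.5, 1.5    # illustrative gains / influence radius

    def force(q, goal, obstacles):
        f = -k_att * (q - goal)                          # attractive term
        for o in obstacles:
            d = np.linalg.norm(q - o)
            if d < rho0:                                 # repulsion only nearby
                f += k_rep * (1/d - 1/rho0) / d**2 * (q - o) / d
        return f

    q, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
    obstacles = [np.array([2.0, 3.2])]
    for _ in range(300):
        step = 0.05 * force(q, goal, obstacles)          # gradient step
        q = q + np.clip(step, -0.2, 0.2)                 # cap keeps the toy stable
    print(np.round(q, 2))   # ends near the goal in this scenario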
Deep Reinforcement Learning For Safe Local Planning Of A Ground Vehicle In Unknown Rough Terrain Safe unmanned ground vehicle navigation in unknown rough terrain is crucial for various tasks such as exploration, search and rescue and agriculture. Offline global planning is often not possible when operating in harsh, unknown environments, and therefore, online local planning must be used. Most online rough terrain local planners require heavy computational resources, used for optimal trajectory searching and estimating vehicle orientation in positions within the range of the sensors. In this work, we present a deep reinforcement learning approach for local planning in unknown rough terrain with zero-range to local-range sensing, achieving superior results compared to potential fields or local motion planning search spaces methods. Our approach includes reward shaping which provides a dense reward signal. We incorporate self-attention modules into our deep reinforcement learning architecture in order to increase the explainability of the learnt policy. The attention modules provide insight regarding the relative importance of sensed inputs during training and planning. We extend and validate our approach in a dynamic simulation, demonstrating successful safe local planning in environments with a continuous terrain and a variety of discrete obstacles. By adding the geometric transformation between two successive timesteps and the corresponding action as inputs, our architecture is able to navigate on surfaces with different levels of friction. Reinforcement learning, autonomous vehicle navigation, motion and path planning.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack, one that calls no functions at all, to be mounted on x86 executables. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing at the different underlying definitions of the concept of ''simulation,'' noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
H∞ control for sampled-data nonlinear systems described by Takagi–Sugeno fuzzy systems In this paper we consider the design problem of output feedback H∞ controllers for sampled-data fuzzy systems. We first transfer them into equivalent jump fuzzy systems. We establish the so-called Bounded Real Lemma for jump fuzzy systems and give a design method of γ-suboptimal output feedback H∞ controllers in terms of two Riccati inequalities with jumps. We then apply the main results to the sampled-data fuzzy systems and obtain a design method of γ-suboptimal output feedback H∞ controllers. We give a numerical example and construct a γ-suboptimal output feedback H∞ controller.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%-8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
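The event-driven sampling principle is easy to state in code: a sample is produced only when the input crosses a quantizer level. Below is a small numpy sketch; the signal, the level spacing, and the assumption that the input is oversampled enough to cross at most one level per step are all illustrative.

    import numpy as np

    t = np.linspace(0, 1, 10000)
    x = 0.5 * np.sin(2 * np.pi * 5 * t)   # stand-in biosignal
    delta = 0.05                          # level spacing (quantizer LSB)

    events = []
    level = round(float(x[0]) / delta)
    for ti, xi in zip(t, x):
        new_level = round(float(xi) / delta)
        if new_level != level:            # a quantizer level was crossed
            level = new_level
            events.append((ti, level * delta))

    # Far fewer events than fixed-rate samples for this slowly varying input:
    print(len(events), "level-crossing events vs", len(t), "fixed-rate samples")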
1.24
0.24
0.24
0.24
0.24
0
0
0
0
0
0
0
0
0
Selective State Retention Power Gating Based on Formal Verification This work aims to reduce the area and power consumption in low-power VLSI design. A new selective approach for State Retention Power Gating (SRPG) based on Module Checking formal verification techniques is presented, termed Selective SRPG (SSRPG). The proposed approach is applied in order to minimize the number of retention flip-flops required for state retention during sleep mode. The proposed technique automatically selects a reduced set of retention flip-flops which includes only the indispensable flip-flops required for a proper state recovery, using some unique criteria. The criteria are represented as a set of formal properties using propositional formulas to analyze the flip-flop's input equations. Those properties are expressed in temporal logic formalism, specifically in Computation Tree Logic (CTL). The extraction of the essential retention flip-flops is carried out using common formal verification techniques. This work suggests an efficient alternative to the conventional SRPG and PG techniques. The proposed approach has been applied to a practical design with about 3000 FFs. The results demonstrate a saving factor of about 80% compared to SRPG, thus reducing area, static power consumption and synthesis tool convergence run time. This leads to a significant potential area reduction of up to 10% of the total chip area and a similar energy impact. Other published SSRPG techniques require either exhaustive simulations or an impractical design representation, and are not aimed at classifying a specific flip-flop in a given physical design. To the best of our knowledge this is the first time common Formal Verification Tools are used for applying a Selective SRPG approach.
Systematic software-based self-test for pipelined processors Software-based self-test (SBST) has recently emerged as an effective methodology for the manufacturing test of processors and other components in systems-on-chip (SoCs). By moving test related functions from external resources to the SoC's interior, in the form of test programs that the on-chip processor executes, SBST significantly reduces the need for high-cost, big-iron testers, and enables high-quality at-speed testing and performance binning. Thus far, SBST approaches have focused almost exclusively on the functional (programmer visible) components of the processor. In this paper, we analyze the challenges involved in testing an important component of modern processors, namely, the pipelining logic, and propose a systematic SBST methodology to address them. We first demonstrate that SBST programs that only target the functional components of the processor are not sufficient to test the pipeline logic, resulting in a significant loss of overall processor fault coverage. We further identify the testability hotspots in the pipeline logic using two fully pipelined reduced instruction set computer (RISC) processor benchmarks. Finally, we develop a systematic SBST methodology that enhances existing SBST programs so that they comprehensively test the pipeline logic. The proposed methodology is complementary to previous SBST techniques that target functional components (their results can form the input to our methodology, and thus we can reuse the test development effort behind preexisting SBST programs). We automate our methodology and incorporate it in an integrated software environment (developed using Java, XML, and archC) for the automatic generation of SBST routines for microprocessors. We apply the methodology to the two complex benchmark RISC processors with respect to two fault models: stuck-at fault model and transition delay fault model. Simulation results show that our methodology provides significant improvements for the two fault models, both for the entire processor (12% fault coverage improvement on average) and for the pipeline logic itself (19% fault coverage improvement on average), compared to a conventional SBST approach.
The ForSpec Temporal Logic: A New Temporal Property-Specification Language In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel's new formal specification language. The key features of FTL are as follows: it is a linear temporal logic, based on Pnueli's LTL, it is based on a rich set of logical and arithmetical operations on bit vectors to describe state properties, it enables the user to define temporal connectives over time windows, it enables the user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives, it enables the user to express properties about the past, and it includes constructs that enable the user to model multiple clock and reset signals, which is useful in the verification of hardware design.
Accelerating microprocessor silicon validation by exposing ISA diversity Microprocessor design validation is a time consuming and costly task that tends to be a bottleneck in the release of new architectures. The validation step that detects the vast majority of design bugs is the one that stresses the silicon prototypes by applying huge numbers of random tests. Despite its bug detection capability, this step is constrained by extreme computing needs for random tests simulation to extract the bug-free memory image for comparison with the actual silicon image. We propose a self-checking method that accelerates silicon validation and significantly increases the number of applied random tests to improve bug detection efficiency and reduce time-to-market. Analysis of four major ISAs (ARM, MIPS, PowerPC, and x86) reveals their inherent diversity: more than three quarters of the instructions can be replaced with equivalent instructions. We exploit this property in post-silicon validation and propose a methodology for the generation of random tests that detect bugs by comparing results of equivalent instructions. We support our bug detection method in hardware with a light-weight mechanism which, in case of a mismatch, replays the random test replacing the offending instruction with its equivalent. Our bug detection method and corresponding hardware significantly accelerate the post-silicon validation process. Evaluation of the method on an x86 microprocessor model demonstrates its efficiency over simulation-based and self-checking alternatives, in terms of bug detection capabilities and validation time speedup.
Microprocessor design faults The complexity of modern microprocessors is such that design faults cannot be avoided. Such design faults can have serious consequences in critical applications. This paper proposes that information should be available from suppliers so that users can assess the suitability of a particular device and take remedial action, should a fault be discovered.
Secure Path Verification Many embedded systems, such as medical, sensing, automotive, and military systems, require basic security functions, often referred to as "secure communications". Nowadays, interest has been growing around defining new security related properties, expressing relationships with information flow and access control. In particular, novel research works are focused on formalizing generic security requirements as propagation properties. These properties, which we name Path properties, are used to see whether it is possible to leak secure data via unexpected paths. In this paper we compare the Path properties described above with formal security properties expressed in CTL logic, named Taint properties. We also compare two verification techniques used to verify Path and Taint properties, considering an abstraction of a Secure Embedded Architecture, and discuss the advantages and drawbacks of each approach.
Threadmill: A post-silicon exerciser for multi-threaded processors Post-silicon validation poses unique challenges that bring-up tools must face, such as the lack of observability into the design, the typical instability of silicon bring-up platforms and the absence of supporting software (like an OS or debuggers). These challenges and the need to reach an optimal utilization of the expensive but very fast silicon platforms lead to unique design considerations - like the need to keep the tool simple and to perform most of its operation on platform without interaction with the environment. In this paper we describe a variety of novel techniques optimized for the unique characteristics of the silicon platform. These techniques are implemented in Threadmill - a bare-metal exerciser targeting multi-threaded processors. Threadmill was used in the verification of the POWER7 processor with encouraging results.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
Geographic Gossip: Efficient Averaging for Sensor Networks Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n, respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O(n^1.5 √(log n) log ε^{-1}) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental
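As a baseline for comparison, the standard pairwise gossip that this paper improves on fits in a few lines of Python; the ring topology, values, and iteration count are illustrative (the geographic variant replaces the random-neighbor choice with greedy geographic routing toward a randomly sampled location):

    import random

    n = 20
    vals = [random.uniform(0, 10) for _ in range(n)]
    target = sum(vals) / n

    for _ in range(5000):
        i = random.randrange(n)
        j = random.choice([(i - 1) % n, (i + 1) % n])    # random ring neighbor
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2      # pairwise average

    print(max(abs(v - target) for v in vals))            # all values close to the mean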
Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds Third-party cloud computing represents the promise of outsourcing as applied to computation. Services, such as Microsoft's Azure and Amazon's EC2, allow users to instantiate virtual machines (VMs) on demand and thus purchase precisely the capacity they require when they require it. In turn, the use of virtualization allows third-party cloud providers to maximize the utilization of their sunk capital costs by multiplexing many customer VMs across a shared physical infrastructure. However, in this paper, we show that this approach can also introduce new vulnerabilities. Using the Amazon EC2 service as a case study, we show that it is possible to map the internal cloud infrastructure, identify where a particular target VM is likely to reside, and then instantiate new VMs until one is placed co-resident with the target. We explore how such placement can then be used to mount cross-VM side-channel attacks to extract information from a target VM on the same machine.
An artificial neural network (p,d,q) model for timeseries forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed.
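A minimal numpy sketch of the hybrid idea follows: fit a linear AR(p) model (standing in for the ARIMA component) by least squares, then train a tiny one-hidden-layer network on its residuals, and forecast with the sum of the two parts. The series, lag order, network size, and training schedule are all illustrative, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    y = np.sin(np.arange(300) * 0.3) + 0.1 * rng.standard_normal(300)
    p = 4
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    t = y[p:]

    w, *_ = np.linalg.lstsq(X, t, rcond=None)   # linear AR(p) component
    resid = t - X @ w                           # what the linear model misses

    # One-hidden-layer network trained on the residuals by gradient descent.
    H = 8
    W1 = 0.1 * rng.standard_normal((p, H))
    W2 = 0.1 * rng.standard_normal(H)
    for _ in range(2000):
        h = np.tanh(X @ W1)
        e = h @ W2 - resid                      # residual-fit error
        W2 -= 1e-3 * h.T @ e / len(t)
        W1 -= 1e-3 * X.T @ ((e[:, None] * W2) * (1 - h**2)) / len(t)

    hybrid = X @ w + np.tanh(X @ W1) @ W2       # linear + nonlinear forecast
    print(np.mean((t - X @ w) ** 2), np.mean((t - hybrid) ** 2))  # training MSEs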
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to chose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3--18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance AES-NI.
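The mechanism is easy to mimic at a high level: every stored control pointer travels with a MAC computed over its value and its storage location, so an attacker who can overwrite memory still cannot forge a valid (pointer, tag) pair without the key. The Python sketch below uses HMAC-SHA256 for clarity; real CCFI uses AES-NI-based MACs over return addresses, function pointers, and vtable pointers, and the key handling and table layout here are purely illustrative.

    import hmac, hashlib

    KEY = b"per-process-secret-key"    # illustrative; CCFI keeps its key out of memory

    def tag(ptr: int, slot: int) -> bytes:
        # MAC binds the pointer value to its storage location.
        msg = ptr.to_bytes(8, "little") + slot.to_bytes(8, "little")
        return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

    def store(table, slot, ptr):
        table[slot] = (ptr, tag(ptr, slot))    # pointer stored with its MAC

    def load(table, slot):
        ptr, t = table[slot]
        if not hmac.compare_digest(t, tag(ptr, slot)):
            raise RuntimeError("control-flow integrity violation")
        return ptr

    tbl = {}
    store(tbl, 0, 0x401000)
    tbl[0] = (0xDEADBEEF, tbl[0][1])   # attacker overwrites the pointer...
    try:
        load(tbl, 0)
    except RuntimeError as e:
        print(e)                        # ...and the check catches it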
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
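For context, the ADMM iteration such a processor implements can be sketched for the lasso form of compressed-sensing recovery, minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z; the problem sizes, lam, and rho below are illustrative, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, k = 64, 256, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    b = A @ x_true                       # compressed measurements

    lam, rho = 0.01, 1.0
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    P = np.linalg.inv(A.T @ A + rho * np.eye(n))    # factor cached across iterations
    for _ in range(200):
        x = P @ (A.T @ b + rho * (z - u))           # quadratic x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u = u + x - z                               # dual (running residual) update

    print(np.linalg.norm(z - x_true))               # small reconstruction error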
1.2
0.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
Opportunistic computing in GPU architectures Data transfer overhead between computing cores and memory hierarchy has been a persistent issue for von Neumann architectures and the problem has only become more challenging with the emergence of manycore systems. A conceptually powerful approach to mitigate this overhead is to bring the computation closer to data, known as Near Data Computing (NDC). Recently, NDC has been investigated in different flavors for CPU-based multicores, while the GPU domain has received little attention. In this paper, we present a novel NDC solution for GPU architectures with the objective of minimizing on-chip data transfer between the computing cores and Last-Level Cache (LLC). To achieve this, we first identify frequently occurring Load-Compute-Store instruction chains in GPU applications. These chains, when offloaded to a compute unit closer to where the data resides, can significantly reduce data movement. We develop two offloading techniques, called LLC-Compute and Omni-Compute. The first technique, LLC-Compute, augments the LLCs with computational hardware for handling the computation offloaded to them. The second technique (Omni-Compute) employs simple bookkeeping hardware to enable GPU cores to compute instructions offloaded by other GPU cores. Our experimental evaluations on nine GPGPU workloads indicate that the LLC-Compute technique provides, on an average, 19% performance improvement (IPC), 11% performance/watt improvement, and 29% reduction in on-chip data movement compared to the baseline GPU design. The Omni-Compute design boosts these benefits to 31%, 16% and 44%, respectively.
Architecture Aware Partitioning Algorithms Existing partitioning algorithms provide limited support for load balancing simulations that are performed on heterogeneous parallel computing platforms. On such architectures, effective load balancing can only be achieved if the graph is distributed so that it properly takes into account the available resources (CPU speed, network bandwidth). With heterogeneous technologies becoming more popular, the need for suitable graph partitioning algorithms is critical. We developed such algorithms that can address the partitioning requirements of scientific computations, and can correctly model the architectural characteristics of emerging hardware platforms.
AMD Fusion APU: Llano The Llano variant of the AMD Fusion accelerated processor unit (APU) deploys AMD Turbo CORE technology to maximize processor performance within the system's thermal design limits. Low-power design and performance/watt ratio optimization were key design approaches, and power gating is implemented pervasively across the APU.
Decoupling Data Supply from Computation for Latency-Tolerant Communication in Heterogeneous Architectures. In today’s computers, heterogeneous processing is used to meet performance targets at manageable power. In adopting increased compute specialization, however, the relative amount of time spent on communication increases. System and software optimizations for communication often come at the costs of increased complexity and reduced portability. The Decoupled Supply-Compute (DeSC) approach offers a way to attack communication latency bottlenecks automatically, while maintaining good portability and low complexity. Our work expands prior Decoupled Access Execute techniques with hardware/software specialization. For a range of workloads, DeSC offers roughly 2 × speedup, and additional specialized compression optimizations reduce traffic between decoupled units by 40%.
Stream Floating: Enabling Proactive and Decentralized Cache Optimizations As multicore systems continue to grow in scale and on-chip memory capacity, the on-chip network bandwidth and latency become problematic bottlenecks. Because of this, overheads in data transfer, the coherence protocol and replacement policies become increasingly important. Unfortunately, even in well-structured programs, many natural optimizations are difficult to implement because of the reactive...
Decentralized Offload-based Execution on Memory-centric Compute Cores.
QsCores: trading dark silicon for scalable energy efficiency with quasi-specific cores Transistor density continues to increase exponentially, but power dissipation per transistor is improving only slightly with each generation of Moore's law. Given the constant chip-level power budgets, this exponentially decreases the percentage of transistors that can switch at full frequency with each technology generation. Hence, while the transistor budget continues to increase exponentially, the power budget has become the dominant limiting factor in processor design. In this regime, utilizing transistors to design specialized cores that optimize energy-per-computation becomes an effective approach to improve system performance. To trade transistors for energy efficiency in a scalable manner, we propose Quasi-specific Cores, or QsCores, specialized processors capable of executing multiple general-purpose computations while providing an order of magnitude more energy efficiency than a general-purpose processor. The QsCores design flow is based on the insight that similar code patterns exist within and across applications. Our approach exploits these similar code patterns to ensure that a small set of specialized cores support a large number of commonly used computations. We evaluate QsCores's ability to target both a single application library (e.g., data structures) as well as a diverse workload consisting of applications selected from different domains (e.g., SPECINT, EEMBC, and Vision). Our results show that QsCores can provide 18.4 x better energy efficiency than general-purpose processors while reducing the amount of specialized logic required to support the workload by up to 66%.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Network-based robust H∞ control of systems with uncertainty This paper is concerned with the design of robust H∞ controllers for uncertain networked control systems (NCSs) with the effects of both the network-induced delay and data dropout taken into consideration. A new analysis method for the H∞ performance of NCSs is provided by introducing some slack matrix variables and employing the information of the lower bound of the network-induced delay. The designed H∞ controller is of memoryless type, which can be obtained by solving a set of linear matrix inequalities. Numerical examples and simulation results are given finally to illustrate the effectiveness of the method.
Incremental Stochastic Subgradient Algorithms for Convex Optimization This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
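A compact numpy sketch of the cyclic variant under stochastic errors is shown below; the component functions f_i(x) = |x - a_i| (whose sum is minimized at the median of the a_i), the noise level, and the diminishing step-size are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    a = np.array([0.0, 1.0, 2.0, 7.0, 9.0])   # one component function per agent
    x = 20.0
    for k in range(1, 3001):
        step = 1.0 / k                         # diminishing step-size
        for ai in a:                           # one full cycle around the ring
            g = np.sign(x - ai)                # subgradient of |x - a_i|
            g += 0.1 * rng.standard_normal()   # stochastic subgradient error
            x -= step * g
    print(x)                                   # close to the median, 2.0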
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with a 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8% power added ...
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems Neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck in Von Neumann architecture. The recent breakthrough on memristor devices made an important step toward realizing a low-power, small-footprint NCS on-a-chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor crossbar based NCS and hinder the design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of the wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.2
0.2
0.2
0.2
0.2
0.1
0.02
0
0
0
0
0
0
0
Distributed Resource Allocation Over Directed Graphs via Continuous-Time Algorithms This paper investigates the resource allocation problem for a group of agents communicating over a strongly connected directed graph, where the total objective function of the problem is composed of the sum of the local objective functions incurred by the agents. With local convex sets, we first design a continuous-time projection algorithm over a strongly connected and weight-balanced directed graph. Our convergence analysis indicates that when the local objective functions are strongly convex, the output state of the projection algorithm could asymptotically converge to the optimal solution of the resource allocation problem. In particular, when the projection operation is not involved, we show the exponential convergence at the equilibrium point of the algorithm. Second, we propose an adaptive continuous-time gradient algorithm over a strongly connected and weight-unbalanced directed graph for the reduced case without local convex sets. In this case, we prove that the adaptive algorithm converges exponentially to the optimal solution of the considered problem, where the local objective functions and their gradients satisfy strong convexity and Lipschitz conditions, respectively. Numerical simulations illustrate the performance of our algorithms.
Distributed Continuous-Time Optimization With Scalable Adaptive Event-Based Mechanisms This paper investigates the distributed continuous-time optimization problem, which consists of a group of agents with variant local cost functions. An adaptive consensus-based algorithm with event triggering communications is introduced, which can drive the participating agents to minimize the global cost function and exclude the Zeno behavior. Compared to the existing results, the proposed event-based algorithm is independent of the parameters of the cost functions, using only the relative information of neighboring agents, and hence is fully distributed. Furthermore, the constraints of the convexity of the cost functions are relaxed.
FROST -- Fast row-stochastic optimization with uncoordinated step-sizes. In this paper, we discuss distributed optimization over directed graphs, where doubly stochastic weights cannot be constructed. Most of the existing algorithms overcome this issue by applying push-sum consensus, which utilizes column-stochastic weights. The formulation of column-stochastic weights requires each agent to know (at least) its out-degree, which may be impractical in, for example, broadcast-based communication protocols. In contrast, we describe FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an optimization algorithm applicable to directed graphs that does not require the knowledge of out-degrees, the implementation of which is straightforward as each agent locally assigns weights to the incoming information and locally chooses a suitable step-size. We show that FROST converges linearly to the optimal solution for smooth and strongly convex functions given that the largest step-size is positive and sufficiently small.
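To make the FROST update structure concrete, here is a minimal numerical sketch in Python. The four-agent directed graph, the row-stochastic weight matrix, the quadratic local objectives, and the uncoordinated step-sizes are all illustrative assumptions; the iteration follows the row-stochastic gradient-tracking form the abstract describes (locally assigned incoming weights, a separately learned eigenvector correction, per-agent step-sizes), not the paper's exact notation or experiments.

import numpy as np

n = 4
targets = np.array([1.0, 3.0, 5.0, 7.0])      # f_i(x) = 0.5 * (x - targets[i])**2
grad = lambda i, x: x - targets[i]

# Row-stochastic weights: each agent normalizes over its in-neighbors only,
# so no agent needs to know its out-degree. Column sums are deliberately != 1.
A = np.array([[0.5, 0.0, 0.0, 0.5],
              [1/3, 1/3, 0.0, 1/3],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

alpha = np.array([0.05, 0.04, 0.06, 0.05])    # uncoordinated per-agent step-sizes
x = np.zeros(n)                               # local estimates
y = np.eye(n)                                 # learns the left Perron eigenvector
z = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers

for _ in range(500):
    y_new = A @ y
    x_new = A @ x - alpha * z
    z = A @ z + np.array([grad(i, x_new[i]) / y_new[i, i]
                          - grad(i, x[i]) / y[i, i] for i in range(n)])
    x, y = x_new, y_new

print(x)    # every agent approaches the global minimizer mean(targets) = 4.0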
A Continuous-Time Algorithm for Distributed Optimization Based on Multiagent Networks Based on the multiagent networks, this paper introduces a continuous-time algorithm to deal with distributed convex optimization. Using nonsmooth analysis and algebraic graph theory, the distributed network algorithm is modeled by the aid of a nonautonomous differential inclusion, and each agent exchanges information from the first-order and the second-order neighbors. For any initial point, the solution of the proposed network can reach consensus to the set of minimizers if the graph has a spanning tree. In contrast to the existing continuous-time algorithms for distributed optimization, the proposed model holds the least number of state variables and relaxes the strongly connected weighted-balanced topology to the weaker case. The modified form of the proposed continuous-time algorithm is also given, and it is proven that this algorithm is suitable for solving distributed problems if the undirected network is connected. Finally, two numerical examples and an optimal placement problem confirm the effectiveness of the proposed continuous-time algorithm.
Accelerated Convergence Algorithm for Distributed Constrained Optimization under Time-Varying General Directed Graphs. This paper studies a class of distributed convex optimization problems by a set of agents in which each agent only has access to its own local convex objective function and the estimate of each agent is restricted to both coupling linear constraint and individual box constraints. Our focus is to devise a distributed primal-dual gradient algorithm for working out the problem over a sequence of time...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
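As a concrete illustration of the heavy-edge heuristic, the Python sketch below performs one round of heavy-edge matching, the core of the coarsening phase: each unmatched vertex is matched to the unmatched neighbor with which it shares the heaviest edge, so heavy edges are collapsed first. The toy graph and weights are invented for illustration; a real multilevel partitioner would contract the matched pairs and recurse.

def heavy_edge_matching(adj):
    """adj maps vertex -> {neighbor: edge weight}; returns matched pairs."""
    matched, pairs = set(), []
    for u in adj:                               # visit vertices in some order
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)              # heaviest incident unmatched edge
            matched.update((u, v))
            pairs.append((u, v))
        else:
            matched.add(u)                      # carried to the coarse graph alone
    return pairs

graph = {
    'a': {'b': 5, 'c': 1}, 'b': {'a': 5, 'd': 2},
    'c': {'a': 1, 'd': 4}, 'd': {'b': 2, 'c': 4},
}
print(heavy_edge_matching(graph))               # [('a', 'b'), ('c', 'd')]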
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
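For readers unfamiliar with the sponge construction this design is based on, the toy Python sketch below shows the absorb/squeeze pattern. The 1-byte rate, 2-byte capacity, padding rule, and ad-hoc 24-bit permutation are placeholders chosen only to keep the example tiny; spongent itself uses a present-type permutation and the hermetic sponge parameters listed above.

def permute(state: bytes, rounds: int = 8) -> bytes:
    s = int.from_bytes(state, "big")
    for r in range(rounds):
        s ^= r + 1                                  # round constant
        s = ((s << 5) | (s >> 19)) & 0xFFFFFF       # rotate the 24-bit state
        s ^= s >> 7                                 # light diffusion
    return s.to_bytes(3, "big")

def sponge_hash(msg: bytes, out_len: int = 4) -> bytes:
    state = bytes(3)                                # rate = 1 byte, capacity = 2 bytes
    for b in msg + b"\x01":                         # absorb (with toy padding)
        state = permute(bytes([state[0] ^ b]) + state[1:])
    out = bytearray()
    while len(out) < out_len:                       # squeeze
        out.append(state[0])
        state = permute(state)
    return bytes(out)

print(sponge_hash(b"hello").hex())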
Noise Analysis and Simulation Method for a Single-Slope ADC With CDS in a CMOS Image Sensor Many mixed-signal circuits are nonlinear time-varying systems whose noise estimation cannot be obtained from the conventional frequency domain noise simulation (FNS). Although the transient noise simulation (TNS) supported by a commercial simulator takes into account nonlinear time-varying characteristics of the circuit, its simulation time is unacceptably long to obtain meaningful noise estimatio...
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm² and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0 to score_13: 1.2, 0.2, 0.1, 0.05, 0.013333, 0, 0, 0, 0, 0, 0, 0, 0, 0
Analysis of Measurement and Application of Digital to Analog Converters for Software Defined Radio Hybrid System. Software defined radio (SDR) and cognitive radio have become the development trend for military and civilian radio stations. This paper analyzes the digital-to-analog converter (DAC) system based on software defined radio applications. The measurement procedure and calibration technique for the hybrid signal system are described in detail. In order to maximize the utilization of the image spectrum and achieve better communication performance, the sub-Nyquist rate DAC system with different construction modes is introduced. The two-phase holding reconstruction mode utilizes higher-order image spectrum by adjusting the duty cycle of the two phases. This technique based on SDR has the merit of lower power consumption and higher efficiency for communication systems.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
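The core of Chord's one operation can be sketched in a few lines of Python: hash node names and keys onto an m-bit identifier ring, and map each key to its successor, the first node whose identifier is at or after the key's, wrapping around. The node names, ring size, and linear scan are illustrative assumptions; real Chord maintains finger tables so each lookup takes O(log N) messages.

import hashlib
from bisect import bisect_left

M = 16                                        # identifier space has 2**M points

def ring_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

nodes = sorted(ring_id(n) for n in ["node-A", "node-B", "node-C", "node-D"])

def successor(key: str) -> int:
    kid = ring_id(key)
    i = bisect_left(nodes, kid)               # first node ID at or after the key
    return nodes[i % len(nodes)]              # wrap around the ring

k = "user:42"
print(f"key {ring_id(k)} is stored on node {successor(k)}")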
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
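As a worked instance of one problem from this survey, the sketch below runs ADMM on the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z. The random problem data and the choices lam = 0.1, rho = 1.0 are assumptions for illustration; the x-, z-, and dual-update pattern is the standard scaled-form ADMM iteration.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, rho = 0.1, 1.0

x = z = u = np.zeros(10)
Atb = A.T @ b
L = np.linalg.cholesky(A.T @ A + rho * np.eye(10))   # factor once, reuse each pass
soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update
    z = soft(x + u, lam / rho)            # z-update: proximal step for the l1 term
    u = u + x - z                         # scaled dual-variable update

print(np.round(z, 3))                     # sparse lasso solution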
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0 to score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A spur reduction architecture for multiphase fractional PLLs In this study, a multiphase fractional phase-locked loop (PLL) is presented with methods to reduce spurs. General causes of spurs are non-idealities of the phase-frequency detector (PFD) and charge pump (CP) and phase errors between the adjacent phases from the oscillator. In the architecture, the non-idealities of the PFD and CP are compensated and a phase correction circuit is added after the os...
A Hybrid Spur Compensation Technique For Finite-Modulo Fractional-N Phase-Locked Loops A finite-modulo fractional-N PLL utilizing a low-bit high-order Delta Sigma modulator is presented. A 4-bit fourth-order Delta Sigma modulator not only performs non-dithered 16-modulo fractional-N operation but also offers less spur generation with negligible quantization noise. Further spur reduction is achieved by charge compensation in the voltage domain and phase interpolation in the time domain, which significantly relaxes the dynamic range requirement of the charge pump compensation current. A 1.8-2.6 GHz fractional-N PLL is implemented in 0.18 μm CMOS. By employing high-order deterministic Delta Sigma modulation and hybrid spur compensation, the spur level of less than -55 dBc is achieved when the ratio of the bandwidth to minimum frequency resolution is set to 1/4. The prototype PLL consumes 35.3 mW in which only 2.7 mW is consumed by the digital modulator and compensation circuits.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
A Spur Elimination Technique for Phase Interpolation-Based Fractional-N PLLs. A fractional spur elimination technique that enables wide-bandwidth phase interpolation-based fractional-N phase-locked loops (PLLs) is proposed. The technique uses specially filtered dither to eliminate the spurious tones otherwise caused by inevitable phase errors. The design of a wide-bandwidth fractional-N PLL based on the spur elimination technique and a theoretical proof of the proposed tech...
A 700-kHz bandwidth ΣΔ fractional synthesizer with spurs compensation and linearization techniques for WCDMA applications A ΣΔ fractional-N frequency synthesizer targeting WCDMA receiver specifications is presented. Through spurs compensation and linearization techniques, the PLL bandwidth is significantly extended with only a slight increase in the integrated phase noise. In a 0.18-μm standard digital CMOS technology a fully integrated prototype with 2.1-GHz output frequency and 35 Hz resolution has an area of 3.4 mm², pads included, and it consumes 28 mW. With a 3-dB closed-loop bandwidth of 700 kHz, the settling time is only 7 μs. The integrated phase noise plus spurs is -45 dBc for the first WCDMA channel (1 kHz to 1.94 MHz) and -65 dBc for the second channel (2.5 to 6.34 MHz) with a worst case in-band (unfiltered) fractional spur of -60 dBc. Given the extremely large bandwidth, the synthesizer could be used also for TX direct modulation over a broad band. The choice of such a large bandwidth, however, still limits the spur performance. A slightly smaller bandwidth would fulfill WCDMA requirements. This has been shown in a second prototype, using the same architecture but employing an external loop filter and VCO for greater flexibility and ease of testing.
Spur Reduction Techniques for Phase-Locked Loops Exploiting A Sub-Sampling Phase Detector This paper presents phase-locked loop (PLL) reference-spur reduction design techniques exploiting a sub-sampling phase detector (SSPD) (which is also referred to as a sampling phase detector). The VCO is sampled by the reference clock without using a frequency divider and an amplitude controlled charge pump is used which is inherently insensitive to mismatch. The main remaining source of the VCO reference spur is the periodic disturbance of the VCO by the sampling at the reference frequency. The underlying VCO sampling spur mechanisms are analyzed and their effect is minimized by using dummy samplers and isolation buffers. A duty-cycle-controlled reference buffer and delay-locked loop (DLL) tuning are proposed to further reduce the worst case spur level. To demonstrate the effectiveness of the proposed spur reduction techniques, a 2.21 GHz PLL is designed and fabricated in 0.18 μm CMOS technology. While using a high loop-bandwidth-to-reference-frequency ratio of 1/20, the reference spur measured from 20 chips is < -80 dBc. The PLL consumes 3.8 mW while the in-band phase noise is -121 dBc/Hz at 200 kHz and the output jitter integrated from 10 kHz to 100 MHz is 0.3 ps rms.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
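The O(MN²) fast non-dominated sorting step is easy to show in isolation; the Python sketch below sorts an arbitrary bi-objective point set (both objectives minimized) into Pareto fronts. The points are made up for illustration, and the crowding-distance selection and genetic operators of the full algorithm are omitted.

def fast_nondominated_sort(pts):
    dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and p != q
    S = {i: [] for i in range(len(pts))}    # indices each solution dominates
    n = [0] * len(pts)                      # domination counts
    fronts = [[]]
    for i, p in enumerate(pts):
        for j, q in enumerate(pts):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)             # non-dominated: first front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:               # all of j's dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

print(fast_nondominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]))
# -> [[0, 1, 2], [3], [4]]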
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
An Introduction To Compressive Sampling Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article s...
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied with cognitive radio, a new type of technology that enables secondary spectrum usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied spectrum opportunistically and coexist with wireless local area networks (WLANs). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids inflicting fatal damage on licensed users. An ns-2 simulation model is then developed to evaluate its performance in scenarios with coexisting WLAN and cognitive networks.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
score_0 to score_13: 1.1, 0.1, 0.1, 0.1, 0.05, 0.025, 0, 0, 0, 0, 0, 0, 0, 0
Improving performance of network covert timing channel through Huffman coding A network covert channel is a mechanism used to transfer covert messages through a network in violation of security policies. The performance of such a channel is crucial to an attacker. Some studies have improved performance by advancing the coding mechanism, but few have taken account of the redundancy of the covert message. This paper introduces a Huffman coding scheme to compress the transferred data by exploiting this redundancy, and investigates the performance of the network timing channel in terms of channel capacity and covertness. A mathematical model of capacity is presented and the effects of its parameters are analyzed. The experiment examines how network delays and the Huffman coding scheme affect capacity and covertness, and the results demonstrate that the performance of the timing channel is improved.
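The pipeline this paper describes, compress first and then signal the coded bits as inter-packet delays, can be sketched as follows. The message, the two-level delay mapping, and the delay values are assumptions for illustration; a real channel would also need framing and noise handling on the receiver side.

import heapq
from collections import Counter

def huffman_code(text):
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]         # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]         # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

msg = "attack at dawn"
code = huffman_code(msg)
bits = "".join(code[c] for c in msg)        # redundancy removed before sending

DELAY = {"0": 0.010, "1": 0.030}            # seconds between consecutive packets
delays = [DELAY[b] for b in bits]
print(f"{8 * len(msg)} raw bits -> {len(bits)} coded bits -> {len(delays)} gaps")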
An Identity Authentication Mechanism Based on Timing Covert Channel In identity authentication, many advanced encryption techniques are applied to confirm and protect the user identity. Although the identity information is transmitted as ciphertext over the Internet, attackers can steal and forge the identity by eavesdropping, cryptanalysis, and forgery. In this paper, a new identity authentication mechanism is proposed, which exploits the Timing Covert Channel (TCC) to transmit the identity information. TCC was originally a hacker technique for leaking information under surveillance, using the sending times of packets to encode the information. In our method, the intervals between packets indicate the authentication tags. It is difficult for attackers to eavesdrop on, crack, and forge the TCC identity, since the packet volume is too large to analyze and the noise differs between the users and the attackers. A platform is designed to verify the proposed method. The experiments show that the intervals and the thresholds are the key factors affecting accuracy and efficiency, and they also prove that our method is a secure way to carry identity information which could be implemented in various network applications.
Energy Efficient Run-Time Incremental Mapping for 3-D Networks-on-Chip 3-D Networks-on-Chip (NoC) emerge as a potent solution to address both the interconnection and design complexity problems facing future Multiprocessor System-on-Chips (MPSoCs). Effective run-time mapping on such 3-D NoC-based MPSoCs can be quite challenging, as the arrival order and task graphs of the target applications are typically not known a priori, which can be further complicated by stringent energy requirements for NoC systems. This paper thus presents an energy-aware run-time incremental mapping algorithm (ERIM) for 3-D NoC which can minimize the energy consumption due to the data communications among processor cores, while reducing the fragmentation effect on the incoming applications to be mapped, and simultaneously satisfying the thermal constraints imposed on each incoming application. Specifically, incoming applications are mapped to cuboid tile regions for lower energy consumption of communication and the minimal routing. Fragment tiles due to system fragmentation can be gleaned for better resource utilization. Extensive experiments have been conducted to evaluate the performance of the proposed algorithm ERIM, and the results are compared against the optimal mapping algorithm (branch-and-bound) and two heuristic algorithms (TB and TL). The experiments show that ERIM outperforms TB and TL methods with significant energy saving (more than 10%), much reduced average response time, and improved system utilization.
Designing Analog Fountain Timing Channels: Undetectability, Robustness, and Model-Adaptation. In existing model-based timing channels, the requirement for the target model to be shared between the sender and the receiver limits the sender's ability to adapt to changes in the inter-packet delay (IPD) distribution of the application traffic. In this paper, using analog fountain codes (AFCs) with a general model-fitting coding framework, we design timing channel schemes that allow the sender to change the target model without synchronizing with the receiver. We first propose analog fountain timing channels based on symbol transition for the case where the application packet streams have an IPD distribution whose shape is similar to the distribution of AFC code symbol values. For more general packet streams, we then propose analog fountain timing channels based on symbol split, in which the linearly mapped symbols are split using a symbol probability split matrix to mimic the IPD distribution of the application traffic. We use real VoIP and SSH traffic to compare the proposed schemes with model-based timing channels using LT codes and AFC. Experimental results show that both of the proposed schemes are model-secure. The robustness of the two schemes is higher than that of model-based timing channels using LT codes, though not as good as those using AFC when the sender and receiver are synchronized with respect to the target model. Moreover, when the sender and the receiver are not synchronized with respect to the model, the robustness of the proposed schemes is significantly higher than that of model-based timing channels.
Efficient Post-Silicon Validation of Network-on-Chip Using Wireless Links Modern complex interconnect systems are augmented with new features to serve the increasing number of on-chip processing elements (PE). To achieve the desired performance, power, and reliability in contemporary designs, Network-on-Chips (NoC) are reinforced with additional hardware and pipeline stages. Wireless hubs are supplemented on top of the baseline wired NoC for efficient intra-chip long distance communications. With the increasing complexity of the network, it is extremely difficult to ensure the functional correctness of the interconnect module at the pre-silicon verification stage. Hence, a robust post-silicon validation mechanism for NoCs has to be devised to guarantee the error-free functioning of the system. This paper exploits the capabilities of the wireless hubs present in wireless NoC (WNoC) to establish a novel post-silicon validation model for communication networks. The proposed method facilitates better observability of the system in case of transient packet faults like misroutes and packet drops without any additional overhead in terms of trace buffer size and trace bandwidth requirement. An overall 30% improvement in fault detection and path reconstruction is observed in comparison to the wired network using this wireless scheme. The wireless transceivers constructively use the existing network to transport the traces to the external debug analyzer, thus eliminating the need for an additional trace bus while elevating the speed of trace communication.
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
Hardware Trojan Detection through Golden Chip-Free Statistical Side-Channel Fingerprinting Statistical side channel fingerprinting is a popular hardware Trojan detection method, wherein a parametric signature of a chip is collected and compared to a trusted region in a multi-dimensional space. This trusted region is statistically established so that, despite the uncertainty incurred by process variations, the fingerprint of Trojan-free chips is expected to fall within this region while the fingerprint of Trojan-infested chips is expected to fall outside. Learning this trusted region, however, assumes availability of a small set of trusted (i.e. "golden") chips. Herein, we rescind this assumption and we demonstrate that an almost equally effective trusted region can be learned through a combination of a trusted simulation model, measurements from process control monitors (PCMs) which are typically present either on die or on wafer kerf, and advanced statistical tail modeling techniques. Effectiveness of this method is evaluated using silicon measurements from two hardware Trojan-infested versions of a wireless cryptographic integrated circuit.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
A Case for Intelligent RAM Two trends call into question the current practice of microprocessors and DRAMs being fabricated as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip is becoming awkward to use in a system, yet size is growing at 60% per year. Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, and improve energy efficiency as well as to allow more flexible selection of memory size and organization. In addition, IRAM promises savings in power and board area. We review the state of microprocessors and DRAMs today, explore some of the opportunities and challenges for IRAMs, and finally estimate performance and energy efficiency of three IRAM designs.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
score_0 to score_13: 1.11, 0.11, 0.1, 0.1, 0.1, 0.05, 0.01, 0, 0, 0, 0, 0, 0, 0
Class-C VCO With Amplitude Feedback Loop for Robust Start-Up and Enhanced Oscillation Swing We propose a feedback class-C voltage-controlled oscillator (VCO) that has robust start-up and a large oscillation amplitude. It initially starts oscillating as a conventional cross-coupled LC-VCO for robust start-up and subsequently transforms automatically into an amplitude-enhanced class-C VCO when it reaches steady-state to give improved noise performance. Detailed analysis of the start-up conditions, enhanced oscillation swing, and amplitude stability provides valuable insight into oscillator design considerations. The proposed VCO is implemented in a 0.18-μm CMOS process. The measured phase noise at room temperature is - 125 dBc/Hz at 1 MHz offset with a power dissipation of 3.4 mW at an oscillation frequency of 4.84 GHz. The figure-of-merit is -193 dBc/Hz.
A Low-Noise Self-Oscillating Mixer Using a Balanced VCO Load. A low-noise self-oscillating mixer (SOM) operating from 7.8 to 8.8 GHz is described in this paper. Three different components, the oscillator, the mixer core, and the LNA transconductor stage, are assembled in a stacked configuration with full dc current reuse from the VCO to the mixer to the LNA. The LC-tank oscillator also functions as a double-balanced IF load to the low-noise mixer core. A theoretical expression is given for the conversion gain of the SOM taking into account the time-varying nature of the IF load impedance. Measurements show that the SOM has a minimum DSB noise figure of 4.39 dB and a conversion gain of 11.6 dB. Its input P1dB is -13.6 dBm and its output P1dB is -2.97 dBm, while its IIP3 and OIP3 are -8.3 dBm and +3.3 dBm, respectively. The chip consumes 12 mW of dc power and occupies an area of 0.47 mm² without pads.
A Harmonic Class-C CMOS VCO Based on a Low-Frequency Feedback Loop: Theoretical Analysis and Experimental Results A novel harmonic Class-C CMOS VCO architecture with improved phase noise performance and power efficiency is presented in this paper. The VCO is based on the widely adopted topology consisting of a cross-coupled pair of NMOS devices refilling a symmetric resonator with a center-tapped inductor, biased by a top PMOS current generator. The Class-C operation mode is obtained through a low-frequency feedback loop built around an operational transconductance amplifier that takes the difference between the inductor center-tap voltage and a reference voltage, pushing the gate bias voltage of the VCO cross-coupled devices well below their threshold voltage. The Class-C VCO achieves a theoretical 2.9 dB phase noise improvement compared to the standard differential-pair LC-tank oscillator for the same current consumption. A prototype of the VCO is implemented in a standard RF 55 nm CMOS technology and compared to both a standard and an optimized VCO implemented in the same technology. All these VCOs share a copy of a unique resonator. The Class-C VCO is tunable over the 6.5-7.8 GHz band and displays an average phase noise lower than -127 dBc/Hz @ 1 MHz offset with a power consumption of 18 mW, for a state-of-the-art figure-of-merit of -187 dBc/Hz @ 1 MHz and -191 dBc/Hz @ 10 MHz offsets, respectively.
A General Theory of Injection Locking and Pulling in Electrical Oscillators—Part II: Amplitude Modulation in $LC$ Oscillators, Transient Behavior, and Frequency Division A number of specialized topics within the theory of injection locking and pulling are addressed. The material builds on our impulse sensitivity function (ISF)-based, time-synchronous model of electrical oscillators under the influence of a periodic injection. First, we show how the accuracy of this model for $LC$ oscillators under large injection is greatly enhanced by accounting for the injection's effect on the oscillation amplitude. In doing so, we capture the asymmetry of the lock range as well as the distinct behaviors exhibited by different $LC$ oscillator topologies. Existing $LC$ oscillator injection locking and pulling theories in the literature are subsumed as special cases. Next, a transient analysis of the dynamics of injection pulling is carried out, both within and outside of the lock range. Finally, we show how our existing framework naturally accommodates locking onto superharmonic and subharmonic injections, leading to several design considerations for injection-locked frequency dividers (ILFDs) and the implementation of a low-power dual-modulus prescaler from an injection-locked ring oscillator. Our theoretical conclusions are supported by simulations and experimental data from a variety of $LC$, ring, and relaxation oscillators.
A 475 mV, 4.9 GHz Enhanced Swing Differential Colpitts VCO With Phase Noise of -136 dBc/Hz at a 3 MHz Offset Frequency. A new enhanced-swing differential Colpitts VCO architecture enables oscillations to swing beyond both the supply voltage and ground, making it suitable for low-voltage operation. Analysis of the oscillation frequency, differential- and common-mode oscillations, amplitude of oscillation, and start-up condition provides insight into oscillator operation and design considerations. Operating at 4.9 GHz, ...
A Noise Circulating Oscillator This paper presents a noise circulating cross-coupled voltage-controlled oscillator (VCO) topology with a transformer-based tank. The introduced noise circulating active core greatly suppresses the effective noise power from the active devices while offering the same amount of negative resistance compared to conventional cross-coupled VCO topologies. The mechanism of noise circulation is investigated with theoretical analysis and further verified by simulation. Due to the broadband nature of the noise circulating technique, the resulting VCO phase noise in both $1/f^2$ and $1/f^3$ regions is greatly improved over a wide frequency tuning range. A prototype VCO at 2.35 GHz is implemented in a standard 130-nm bulk CMOS process with 0.36-mm² core area. It draws 2.15 mA from a 1.2-V supply. The measured figure-of-merit (FoM) is 193.1/195.0/195.6 dBc/Hz at 100k/1M/10MHz offsets with a $1/f^3$ phase noise corner of only 50 kHz. The VCO design consistently achieves >192.8 dBc/Hz FoM at 100k/1M/10MHz offsets and <60 kHz $1/f^3$ phase noise corner over its entire 18.6% frequency tuning range (2.05-2.47 GHz). It also exhibits low supply frequency pushing of -25 and -13 MHz/V at the highest and lowest frequencies, respectively.
Highly Integrated and Tunable RF Front Ends for Reconfigurable Multiband Transceivers: A Tutorial. Architectural and circuit techniques to integrate the RF front end passive components, namely the SAW filters and duplexers that are traditionally implemented off chip, are presented. Intended for software-defined and cognitive radio platforms, tunable high-Q filters realized by CMOS switches and linear or MOS capacitors allow the integration of highly reconfigurable transceiver front ends that ar...
Analysis and optimization of direct-conversion receivers with 25% duty-cycle current-driven passive mixers The performance of zero-IF receivers with current-driven passive mixers driven by 25% duty-cycle quadrature clocks is studied and analyzed. It is shown that, in general, these receivers outperform the ones that utilize passive mixers with 50% duty-cycle clocks. The known problems in receivers with 50% duty-cycle mixers, such as having unequal high- and low-side conversion gains, unexpected IIP2 and IIP3 numbers, and IQ crosstalk, are significantly lowered due to the operating principles of the 25% duty-cycle passive mixer. It is revealed that with an intelligent sizing of the design parameters, the 25%-duty-cycle-mixer-based receiver is superior in terms of linearity, noise, and elimination of IQ crosstalk.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
On The Advantages of Tagged Architecture This paper proposes that all data elements in a computer memory be made to be self-identifying by means of a tag. The paper shows that the advantages of the change from the traditional von Neumann machine to tagged architecture are seen in all software areas including programming systems, operating systems, debugging systems, and systems of software instrumentation. It discusses the advantages that accrue to the hardware designer in the implementation and gives examples for large- and small-scale systems. The economic costs of such an implementation for a minicomputer system are examined. The paper concludes that such a machine architecture may well be a suitable replacement for the traditional von Neumann architecture.
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm² and consumes 178.4 mW at 5 Gb/s.
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on a fixed peak current and an adaptable off time as the average current control method. The control loop comprises an averaging block, a transconductance amplifier, and an innovative time modulator. This modulator is intended to provide a constant control-loop response regardless of input voltage, current storage inductor, and number of LEDs, in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). Stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
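For orientation, the average-current relation that a fixed-peak-current, variable off-time buck exploits follows from elementary CCM inductor ripple relations; a generic sketch (textbook relations under ideal-component assumptions, not the paper's exact control equations):

```latex
% Generic CCM relations for a fixed-peak-current, variable off-time
% buck LED driver (ideal components assumed; illustrative only):
I_{\mathrm{LED}} = I_{\mathrm{pk}} - \frac{\Delta I_L}{2}, \qquad
\Delta I_L = \frac{V_{\mathrm{LED}}\, t_{\mathrm{off}}}{L}
% Holding I_pk fixed and adapting t_off to V_LED and L keeps the average
% LED current constant, which is what the time modulator targets.
```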
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time $\Delta\Sigma$ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 µs while small neural signals are continuously monitored.
score_0..score_13: 1.010806, 0.010526, 0.010526, 0.010526, 0.005346, 0.005263, 0.002105, 0, 0, 0, 0, 0, 0, 0
A novel GPP-based Software-Defined Radio architecture This paper presents a new architecture for a software-defined radio (SDR) platform on commodity PCs with multi-core CPUs, which uses both hardware and software techniques to address the challenges of using PC architectures for high-throughput SDR. The new architecture introduces the PCI Express (PCIe) bus as a high-throughput, low-latency data transfer interface between the hardware platform and PC memory. It also adopts the Xenomai operating system (OS) to meet the real-time requirements of modern wireless protocols. Further, we propose an interrupt-driven model to guarantee synchronization between hardware and software. The experimental results show that the proposed architecture can meet the requirements of modern wireless communication systems.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
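For readers unfamiliar with dominance frontiers, the computation can be sketched in a few lines of Python using the compact Cooper-Harvey-Kennedy formulation (a later simplification, not this paper's original algorithm); `preds` and `idom` are assumed precomputed, with the entry node its own immediate dominator:

```python
def dominance_frontiers(preds, idom):
    """Compute DF[b] for every node b, given predecessor lists `preds`
    and immediate dominators `idom` (with idom[entry] == entry)."""
    df = {b: set() for b in preds}
    for b in preds:
        if len(preds[b]) >= 2:            # only join points contribute
            for p in preds[b]:
                runner = p
                while runner != idom[b]:  # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))  # DF(a) = DF(b) = {'join'}
```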
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
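A minimal sketch of the one operation Chord provides: keys and nodes hash onto the same circular identifier space, and each key is owned by its clockwise successor (finger tables, joins, and failure handling omitted; names and the identifier-space size are illustrative):

```python
import hashlib
from bisect import bisect_left

M = 2**16  # small identifier space, illustrative only

def h(name: str) -> int:
    """Hash node names and keys onto the same circular id space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

class ChordRing:
    """Toy Chord ring supporting only successor lookup."""
    def __init__(self, nodes):
        self.ids = sorted(h(n) for n in nodes)

    def successor(self, key: str) -> int:
        i = bisect_left(self.ids, h(key))   # first node id >= hash(key)
        return self.ids[i % len(self.ids)]  # wrap around the ring

ring = ChordRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.successor("some-data-item"))     # id of the responsible node
```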
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
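As one concrete instance from the family of problems discussed, the lasso admits the textbook ADMM iteration (x-update via a cached Cholesky solve, z-update by soft-thresholding, dual ascent on u); a minimal numpy sketch with fixed penalty ρ and no stopping criteria:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Minimize (1/2)||Ax-b||^2 + lam*||x||_1 via textbook ADMM."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    x = z = u = np.zeros(n)
    for _ in range(iters):
        q = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, q))          # x-update
        v = x + u
        z = np.maximum(0, v - lam/rho) - np.maximum(0, -v - lam/rho)  # soft-threshold
        u = u + x - z                                            # dual update
    return z

rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((50, 20)), np.zeros(20)
x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))  # recovers the sparse support
```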
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) to a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0..score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 265-µW Fractional-N Digital PLL With Seamless Automatic Switching Sub-Sampling/Sampling Feedback Path and Duty-Cycled Frequency-Locked Loop in 65-nm CMOS This article proposes a fractional-N digital phase-locked loop (DPLL) that achieves 265-µW ultra-low-power operation. The proposed switching feedback can seamlessly change the DPLL from sampling operation to sub-sampling operation without disturbing the phase-locked state of the DPLL, reducing the number of building blocks that work at the oscillator frequency and leading to significant power reduction. With the reduced number of high-frequency circuits, reference-frequency scaling is fully exploited to reduce the power consumption of the DPLL. Together with an out-of-dead-zone detector and a duty-cycled frequency-locked loop running in the background, the switching feedback achieves robust frequency and phase acquisition at start-up and helps the sub-sampling PLL recover when large phase and frequency disturbances occur. A transformer-based stacked-gm oscillator is proposed to minimize the power consumption while providing sufficient swing to drive the subsequent stages. A truncated constant-slope digital-to-time converter is proposed to improve the power efficiency while retaining good linearity. The proposed fractional-N DPLL consumes only 265 µW while achieving an integrated jitter of 2.8 ps and a worst-case fractional spur of -52 dBc, corresponding to a figure of merit (FOM) of -237 dB.
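The quoted -237 dB FOM is consistent with the standard jitter-power PLL figure of merit (our arithmetic as a check):

```latex
% Standard jitter-power PLL FoM with the quoted numbers (our arithmetic):
\mathrm{FOM} = 10\log_{10}\!\left[\left(\frac{\sigma_t}{1\,\mathrm{s}}\right)^{2}
               \cdot \frac{P}{1\,\mathrm{mW}}\right]
             = 20\log_{10}(2.8\times10^{-12}) + 10\log_{10}(0.265)
             \approx -231.1 - 5.8 \approx -237\ \mathrm{dB}
```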
Design of Symmetrical Class E Power Amplifiers for Very Low Harmonic-Content Applications Class E power amplifier circuits are very suitable for high efficiency power amplification applications in the radio-frequency and microwave ranges. However, due to the inherent asymmetrical driving arrangement, they suffer significant harmonic contents in the output voltage and current, and usually require substantial design efforts in achieving the desired load matching networks for applications requiring very low harmonic contents. In this paper, the design of a Class E power amplifier with resonant tank being symmetrically driven by two Class E circuits is studied. The symmetrical Class E circuit, under nominal operating conditions, has extremely low harmonic distortions, and the design of the impedance matching network for harmonic filtering becomes less critical. Practical steady-state design equations for Class E operation are derived and graphically presented. Experimental circuits are constructed for distortion evaluation. It has been found that this circuit offers total harmonic distortions which are about an order of magnitude lower than those of the conventional Class E power amplifier.
Analysis and Optimum Design of a Class E RF Power Amplifier A new analysis of a class E power amplifier is presented and a fully analytic design approach is developed. Using our analysis, all of the circuit currents and voltages and, hence, the power dissipation in each component is calculated as a function of a key design parameter, denoted by x. This parameter is the ratio of the resonance frequency of the shunt inductor and shunt capacitor to the operat...
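For orientation, the classic idealized Class-E design values at 50% duty cycle with an ideal RF choke (Sokal's textbook equations, shown only as the baseline that analytic treatments like this one generalize through their design parameter):

```latex
% Classic idealized Class-E design values, 50% duty cycle, ideal RF choke
% (Sokal's equations; not this paper's generalized results):
R \approx 0.5768\,\frac{V_{CC}^{2}}{P_{\mathrm{out}}}, \qquad
C_{1} \approx \frac{0.1836}{\omega R}, \qquad
v_{\mathrm{sw,peak}} \approx 3.56\,V_{CC}
```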
Analysis of Circuit Noise and Non-Ideal Filtering Impact on Energy Detection Based Ultra-Low-Power Radios Performance. With the coming of age of the Internet of Things, demand on ultra-low power (ULP) radios will continue to boost tremendously. Circuit imperfections, especially in power hungry blocks, i.e., the local oscillators (LO) and band pass filters (BPFs), pose a real challenge for ULP radios designers considering their tight power budget. This brief presents an investigation on the effects of circuit non-i...
High-Efficiency Class-E Power Amplifier With Shunt Capacitance and Shunt Filter. An analysis of a novel single-ended Class-E mode with shunt capacitance and shunt filter with explicit derivation of the idealized optimum voltage and current waveforms and load-network parameters with their verification by frequency domain simulations with 50% duty ratio is presented. The ideal collector voltage and current waveforms demonstrate a possibility of 100% efficiency. The circuit desig...
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
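A toy illustration of this algebraic form, using Cheng's conventions (True ↦ [1,0]ᵀ, False ↦ [0,1]ᵀ; for these column vectors the semi-tensor product reduces to the Kronecker product). The two-node network below is our own example, not one from the paper:

```python
import numpy as np

T, F = np.array([1, 0]), np.array([0, 1])   # vector form: True, False

def stp(a, b):
    """Semi-tensor product; for the column vectors used here it
    coincides with the Kronecker product."""
    return np.kron(a, b)

# Toy Boolean network (our example): x1' = x2,  x2' = x1 AND x2.
# Column j of L is the next joint state when the current joint state
# (x1, x2) is the j-th canonical vector of dimension 4.
L = np.zeros((4, 4), dtype=int)
for j, (x1, x2) in enumerate([(1, 1), (1, 0), (0, 1), (0, 0)]):
    nxt = stp(T if x2 else F, T if (x1 and x2) else F)
    L[:, j] = nxt

print(L)            # linear (algebraic) representation of the network
print(np.trace(L))  # number of fixed points = trace(L); here 2 (TT, FF)
```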
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables without calling any functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
MicroGP—An Evolutionary Assembly Program Generator This paper describes µGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. µGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show µGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%-8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
score_0..score_13: 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
An Identity-Free and On-Demand Routing Scheme against Anonymity Threats in Mobile Ad Hoc Networks Introducing node mobility into the network also introduces new anonymity threats. This important change of the concept of anonymity has recently attracted attentions in mobile wireless security research. This paper presents identity-free routing and on-demand routing as two design principles of anonymous routing in mobile ad hoc networks. We devise ANODR (ANonymous On-Demand Routing) as the needed anonymous routing scheme that is compliant with the design principles. Our security analysis and simulation study verify the effectiveness and efficiency of ANODR.
Space-Optimal Counting in Population Protocols. In this paper, we study the fundamental problem of counting, which consists in computing the size of a system. We consider the distributed communication model of population protocols: finite-state, anonymous, and asynchronous mobile devices (agents) communicating in pairs according to a fairness condition. This work significantly improves the previous results known for counting in this model in terms of exact space complexity. We present and prove correct the first space-optimal protocols solving the problem for two classical types of fairness, global and weak. Both protocols require no initialization of the counted agents. The protocol designed for global fairness, surprisingly, uses only one bit of memory (two states) per counted agent. The protocol functioning under weak fairness requires the necessary log P bits (P states) per counted agent to be able to count up to P agents. Interestingly, this protocol exploits the intriguing Gros sequence of natural numbers, which is also used in the solutions to the Chinese Rings and the Hanoi Towers puzzles.
Estimating and sampling graphs with multidimensional random walks Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as independent random vertex sampling and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real-world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling for sampling the tail of the degree distribution of the graph.
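A minimal sketch of the m-dimensional walk just described: keep m walkers, move the one at node v with probability proportional to deg(v), and step it to a uniformly chosen neighbor (the estimators built on the sampled edges are omitted):

```python
import random

def frontier_sample(graph, m=10, steps=1000, seed=0):
    """graph: dict node -> list of neighbors. Yields sampled edges.
    Frontier sampling sketch: m dependent walkers; the walker at
    node v moves with probability proportional to deg(v)."""
    rng = random.Random(seed)
    walkers = rng.sample(list(graph), m)             # m random start nodes
    for _ in range(steps):
        degs = [len(graph[v]) for v in walkers]
        i = rng.choices(range(m), weights=degs)[0]   # degree-proportional pick
        u = walkers[i]
        v = rng.choice(graph[u])                     # uniform neighbor
        walkers[i] = v
        yield (u, v)

# Toy graph: a path attached to a triangle
g = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(sum(1 for _ in frontier_sample(g, m=2, steps=100)))  # 100 sampled edges
```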
Brief Announcement: Investigating the Cost of Anonymity on Dynamic Networks In this paper we study the problem of counting processes in a synchronous dynamic network where a distinguished leader is available and other nodes share the same identifier. The network topology may change at each synchronous round and each node communicates with its neighbors by broadcasting messages. In such networks it is well known that counting requires Ω(D) rounds, where D is the network diameter. We identify a non-trivial subset of dynamic networks where counting requires Ω(log |V|) rounds even when the dynamic diameter, D, is constant with respect to the network size and the bandwidth is unlimited.
Conscious and Unconscious Counting on Anonymous Dynamic Networks. This paper addresses the problem of counting the size of a network where (i) processes have the same identifiers (anonymous nodes) and (ii) the network topology constantly changes (dynamic network). Changes are driven by a powerful adversary that can look at internal process states and add and remove edges in order to hinder the convergence of the algorithm to the correct count. The paper proposes two leader-based counting algorithms. Such algorithms are based on a technique that mimics an energy transfer between network nodes. The first algorithm assumes that the adversary cannot generate either disconnected network graphs or network graphs where nodes have degree greater than D. In such an algorithm, the leader can count the size of the network and detect the counting termination in a finite time (i.e., a conscious counting algorithm). The second algorithm assumes that the adversary only keeps the network graph connected at any time, and we prove that the leader can still converge to a correct count in a finite number of rounds, but it is not conscious of when this convergence happens.
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n - 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in n, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Parsimonious flooding in dynamic graphs An edge-Markovian process with birth-rate p and death-rate q generates infinite sequences of graphs (G_0, G_1, G_2, …) with the same node set [n] such that G_t is obtained from G_{t-1} as follows: if $e \notin E(G_{t-1})$ then $e \in E(G_t)$ with probability p, and if $e \in E(G_{t-1})$ then $e \notin E(G_t)$ with probability q. In this paper, we establish tight bounds on the complexity of flooding in edge-Markovian graphs, where flooding is the basic mechanism in which every node becoming aware of an information at step t forwards this information to all its neighbors at all forthcoming steps t′ > t. These bounds complete previous results obtained by Clementi et al. Moreover, we also show that flooding in dynamic graphs can be implemented in a parsimonious manner, so as to save bandwidth, yet preserving efficiency in terms of simplicity and completion time. For a positive integer k, we say that the flooding protocol is k-active if each node forwards an information only during the k time steps immediately following the step at which the node receives that information for the first time. We define the reachability threshold for the flooding protocol as the smallest integer k such that, for any source $s \in [n]$, the k-active flooding protocol from s completes (i.e., reaches all nodes), and we establish tight bounds for this parameter. We show that, for a large spectrum of parameters p and q, the reachability threshold is by several orders of magnitude smaller than the flooding time. In particular, we show that it is even constant whenever the ratio p/(p + q) exceeds log n/n. Moreover, we also show that being active for a number of steps equal to the reachability threshold (up to a multiplicative constant) allows the flooding protocol to complete in optimal time, i.e., in asymptotically the same number of steps as when being perpetually active. These results demonstrate that flooding can be implemented in a practical and efficient manner in dynamic graphs. The main ingredient in the proofs of our results is a reduction lemma enabling to overcome the time dependencies in edge-Markovian dynamic graphs.
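The edge-Markovian evolution and the k-active rule are easy to simulate directly; a small sketch (toy parameters and sizes, no attempt at the paper's analytical bounds):

```python
import random

def k_active_flooding(n, p, q, k, source=0, seed=0, max_steps=10**4):
    """Edge-Markovian graph on n nodes (birth-rate p, death-rate q);
    each informed node forwards only for the k steps after first hearing.
    Returns the step at which all nodes are informed, or None."""
    rng = random.Random(seed)
    edges = set()                 # start from the empty graph
    informed_at = {source: 0}
    for t in range(1, max_steps + 1):
        # evolve every potential edge independently
        nxt = set()
        for i in range(n):
            for j in range(i + 1, n):
                alive = (i, j) in edges
                if (alive and rng.random() > q) or (not alive and rng.random() < p):
                    nxt.add((i, j))
        edges = nxt
        # nodes still within their k-step activity window forward
        active = {v for v, t0 in informed_at.items() if t - t0 <= k}
        for (i, j) in edges:
            if i in active and j not in informed_at:
                informed_at[j] = t
            if j in active and i not in informed_at:
                informed_at[i] = t
        if len(informed_at) == n:
            return t
    return None

print(k_active_flooding(n=30, p=0.05, q=0.3, k=3))
```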
Efficient and Reliable Broadcast is Achievable in an Eventually Connected Network
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
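The classical scalar Lloyd-Max design that this work generalizes alternates nearest-level partitioning with centroid updates; a short numpy sketch on empirical samples (squared-error criterion):

```python
import numpy as np

def lloyd_max(samples, levels=4, iters=50):
    """Scalar Lloyd-Max quantizer designed from empirical samples:
    alternate (1) nearest-representative partition, (2) centroid update."""
    reps = np.quantile(samples, np.linspace(0.1, 0.9, levels))  # init
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - reps[None, :]), axis=1)
        for j in range(levels):
            if np.any(idx == j):
                reps[j] = samples[idx == j].mean()              # centroid
    return np.sort(reps)

x = np.random.default_rng(1).standard_normal(10_000)
print(np.round(lloyd_max(x), 3))  # approaches the optimal 4-level Gaussian quantizer
```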
The software radio concept Since early 1980 an exponential blowup of cellular mobile systems has been observed, which has produced, all over the world, the definition of a plethora of analog and digital standards. In 2000 the industrial competition between Asia, Europe, and America promises a very difficult path toward the definition of a unique standard for future mobile systems, although market analyses underline the trading benefits of a common worldwide standard. It is therefore in this field that the software radio concept is emerging as a potential pragmatic solution: a software implementation of the user terminal able to dynamically adapt to the radio environment in which it is, time by time, located. In fact, the term software radio stands for radio functionalities defined by software, meaning the possibility to define by software the typical functionality of a radio interface, usually implemented in TX and RX equipment by dedicated hardware. The presence of the software defining the radio interface necessarily implies the use of DSPs to replace dedicated hardware, to execute, in real time, the necessary software. In this article objectives, advantages, problem areas, and technological challenges of software radio are addressed. In particular, SW radio transceiver architecture, possible SW implementation, and its download are analyzed
A framework for security on NoC technologies Multiple heterogeneous processor cores, memory cores and application specific IP cores integrated in a communication network, also known as networks on chips (NoCs), will handle a large number of applications including security. Although NoCs offer more resistance to bus probing attacks, power/EM attacks and network snooping attacks are relevant. For the first time, a framework for security on NoC at both the network level (or transport layer) and at the core level (or application layer) is proposed. At the network level, each IP core has a security wrapper and a key-keeper core is included in the NoC, protecting encrypted private and public keys. Using this framework, unencrypted keys are prevented from leaving the cores and NoC. This is crucial to prevent untrusted software on or off the NoC from gaining access to keys. At the core level (application layer) the security framework is illustrated with software modification for resistance against power attacks with extremely low overheads in energy. With the emergence of secure IP cores in the market and nanometer technologies, a security framework for designing NoCs is crucial for supporting future wireless Internet enabled devices.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied by cognitive radio, a new type of technology that enables secondary usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) that lets cognitive radios access unoccupied spectrum opportunistically and coexist with wireless local area networks (WLANs). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids inflicting fatal damage on licensed users. An ns-2 simulation model is then developed to evaluate its performance in scenarios with coexisting WLAN and cognitive networks.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable sensor networks to provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node, single-timer-based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.222
0.222
0.112
0.080667
0.034833
0.01
0.002951
0.000603
0
0
0
0
0
0
Epidemic Propagation With Positive and Negative Preventive Information in Multiplex Networks We propose a novel epidemic model based on two-layered multiplex networks to explore the influence of positive and negative preventive information on epidemic propagation. In the model, one layer represents a social network with positive and negative preventive information spreading competitively, while the other denotes the physical contact network with epidemic propagation. The individuals who are aware of positive prevention will take more effective measures to avoid being infected than those who are aware of negative prevention. Taking the microscopic Markov chain (MMC) approach, we analytically derive the expression of the epidemic threshold for the proposed epidemic model, which indicates that the diffusion of positive and negative prevention information, as well as the topology of the physical contact network, has a significant impact on the epidemic threshold. By comparing the results obtained with MMC and those with Monte Carlo (MC) simulations, it is found that they are in good agreement and that MMC describes the dynamics of the proposed model well. Meanwhile, through extensive simulations, we demonstrate the impact of positive and negative preventive information on the epidemic threshold, as well as on the prevalence of infectious diseases. We also find that epidemic prevalence and epidemic outbreaks can be suppressed by the diffusion of positive preventive information and promoted by the diffusion of negative preventive information.
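For context, MMC analyses of awareness-coupled epidemics on two-layer multiplex networks typically yield a threshold of the form below, in which the contact-layer adjacency is attenuated by the stationary awareness probabilities. The notation follows the closely related two-layer UAU-SIS literature and is an assumption here; the paper's own expression further splits awareness into positive and negative channels.

```latex
\beta_c \;=\; \frac{\mu}{\Lambda_{\max}(H)},
\qquad
h_{ij} \;=\; \bigl(1 - (1-\gamma)\,p_i^{A}\bigr)\, a_{ij},
```

where $\mu$ is the recovery rate, $a_{ij}$ the adjacency of the physical contact layer, $p_i^{A}$ the stationary probability that node $i$ is aware, $\gamma \in [0,1]$ the infectivity attenuation for aware individuals, and $\Lambda_{\max}$ the largest eigenvalue.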
Deadlock avoidance in flexible manufacturing systems using finite automata A distinguishing feature of a flexible manufacturing system (FMS) is the ability to perform multiple tasks in one machine or workstation (alternative machining) and the ability to process parts according to more than one sequence of operations (alternative sequencing). In this paper, we address the issue of deadlock avoidance in systems having these characteristics. A deadlock-free and maximally permissive control policy that incorporates this flexibility is developed based on finite automata models of part process plans and the FMS. The resulting supervisory controller is used for dynamic evaluation of deadlock avoidance based on the remaining processing requirements of the parts.
Observability of hybrid automata by abstraction In this paper, we deal with the observability problem of a class of Hybrid Systems whose output is a timed string on a finite alphabet. We determine under which conditions it is always possible to immediately detect, using the observed output, when the system enters a given discrete state. We illustrate how to construct a Timed Automaton that is an abstraction of the given Hybrid System, and that preserves its observability properties. Moreover, we propose a verification algorithm with polynomial complexity for checking the observability of the Timed Automaton, and a constructive procedure for an observer of the discrete state.
Decentralized observability of discrete event systems with synchronizations. This paper deals with the problem of decentralized observability of discrete event systems. We consider a set of sites, each capable of observing a subset of the total event set. When a synchronization occurs, each site transmits its own observation to a coordinator that decides whether the observed word belongs to a reference language K. Two different properties are studied: uniform q-observability and q-sync observability. It is proved that both properties are decidable for regular languages. Finally, under the assumption that languages K and L are regular, and all the events are observable by at least one site, we propose a procedure to determine the instants at which synchronization should occur to detect the occurrence of any word not in K, as soon as it occurs. The advantage of the proposed approach is that most of the burdensome computations can be moved off-line.
Observability of Finite Labeled Transition Systems. Finite labeled transition systems are nondeterministic and nontotal systems with finitely many inputs, states, and outputs. This paper provides algorithms for verifying the observability of finite labeled transition systems in the so-called multiple-experiment case, the simple-experiment case, and the arbitrary-experiment case, respectively, where these algorithms run in exponential time, exponent...
Event-Triggered Control for Output Regulation of Probabilistic Logical Systems With Delays This article investigates the output regulation problem of probabilistic k-valued logical systems with delays under an intermittent control scheme. Two types of event-triggered control are designed via the semi-tensor product (STP) of matrices. Based on the algebraic state-space representation of probabilistic k-valued systems with delays, the problem is transformed into the existence of solutions of algebraic equations. We then obtain a sufficient and necessary condition for the output regulation problem of probabilistic k-valued logical systems with delays. Two types of approaches are given to design the event-triggered control laws. Finally, two examples are provided to illustrate the effectiveness and utility of the results.
Observability, Reconstructibility and State Observers of Boolean Control Networks The aim of this paper is to introduce and characterize observability and reconstructibility properties for Boolean networks and Boolean control networks, described according to the algebraic approach proposed by D. Cheng and co-authors in the series of papers [3], [6], [7] and in the recent monograph. A complete characterization of these properties, based both on the Boolean matrices involved in the network description and on the corresponding digraphs, is provided. Finally, the problem of state observer design for reconstructible BNs and BCNs is addressed, and two different solutions are proposed.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Fuzzy tracking control design for nonlinear dynamic systems via T-S fuzzy model This study introduces a fuzzy control design method for nonlinear systems with a guaranteed H∞ model reference tracking performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to represent a nonlinear system. Next, based on the fuzzy model, a fuzzy observer-based fuzzy controller is developed to make the tracking error as small as possible for all bounded reference inputs. The advantage of the proposed tracking control design is that only a simple fuzzy controller is used, without feedback linearization or a complicated adaptive scheme. By the proposed method, the fuzzy tracking control design problem is parameterized in terms of a linear matrix inequality problem (LMIP). The LMIP can be solved very efficiently using convex optimization techniques. A simulation example is given to illustrate the design procedure and tracking performance of the proposed method.
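In this line of work, the guaranteed H∞ model-reference tracking performance is usually formalized as an attenuation inequality of the form below; the weighting matrix Q, attenuation level ρ, and reference-model notation are assumptions consistent with the abstract rather than quoted from the paper.

```latex
\int_0^{t_f} \bigl(x(t) - x_r(t)\bigr)^{\top} Q \,\bigl(x(t) - x_r(t)\bigr)\,\mathrm{d}t
\;\le\;
\rho^2 \int_0^{t_f} r^{\top}(t)\, r(t)\,\mathrm{d}t
```

Here $x_r$ is the state of the reference model driven by the bounded input $r$, $Q \succ 0$ is a weighting matrix, and $\rho$ is the prescribed attenuation level; the observer and controller gains achieving this bound are obtained from the LMI problem mentioned above.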
Exploring an unknown graph It is desired to explore all edges of an unknown directed, strongly connected graph. At each point one has a map of all nodes and edges visited, one can recognize these nodes and edges upon seeing them again, and it is known how many unexplored edges emanate from each node visited. The goal is to minimize the ratio of the total number of edges traversed to the optimum number of traversals had the graph been known. For Eulerian graphs this ratio cannot be better than 2, and 2 is achievable by a simple algorithm. In contrast, the ratio is unbounded when the deficiency of the graph (the number of edges that have to be added to make it Eulerian) is unbounded. The main result is an algorithm that achieves a bounded ratio when the deficiency is bounded; unfortunately the ratio is exponential in the deficiency. It is also shown that, when partial information about the graph is available, minimizing the worst-case ratio is PSPACE-complete.
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer-platform-independent descriptions of object and service functionality. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entity groups as 'open' object-oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic, implementation- and location-independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities are modeled as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems Neuromorphic computing systems (NCS) are a promising architecture to combat the well-known memory bottleneck of the Von Neumann architecture. The recent breakthrough in memristor devices marked an important step toward realizing a low-power, small-footprint NCS on-a-chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor-crossbar-based NCS and hinder design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
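To make the resolution cross-over concrete, the toy script below counts level-crossing events against fixed-rate samples for a slow test tone. The signal, sample rate, and resolutions are illustrative assumptions, not the paper's power model; the point is only that LC event counts grow with quantizer resolution until they overtake the fixed-rate sample count.

```python
import numpy as np

fs, dur = 2000, 1.0                          # fixed-rate reference: 2 kS/s for 1 s
t = np.arange(0, dur, 1 / fs)
sig = 0.8 * np.sin(2 * np.pi * 5 * t)        # slow, biosignal-like test tone

def level_crossings(x, n_bits):
    """Count level-crossing events for a uniform quantizer with 2**n_bits
    levels spanning [-1, 1]; each crossing is one event the LCADC converts."""
    lsb = 2.0 / 2 ** n_bits
    codes = np.floor((x + 1.0) / lsb).astype(int)
    return int(np.count_nonzero(np.diff(codes)))

for bits in (4, 6, 8, 10):
    events = level_crossings(sig, bits)
    print(f"{bits:2d}-bit: {events:6d} LC events vs {len(t)} fixed-rate samples")
```

With these assumed numbers, the event count overtakes the fixed-rate sample count around 8 bits, mirroring the cross-over behavior described above.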
1.24
0.24
0.24
0.24
0.24
0.24
0.048
0
0
0
0
0
0
0
A 0.5–1.1-V Adaptive Bypassing SAR ADC Utilizing the Oscillation-Cycle Information of a VCO-Based Comparator A successive approximation register (SAR) analog-to-digital converter (ADC) with a voltage-controlled oscillator (VCO)-based comparator is presented in this paper. The relationship between the input voltage and the number of oscillation cycles (NOC) to reach a VCO-comparator decision is explored, implying an inherent coarse quantization in parallel with the normal comparison. The NOC as a design parameter is introduced and analyzed with noise, metastability, and tradeoff considerations. The NOC is exploited to bypass a certain number of SAR cycles for higher power efficiency of VCO-based SAR ADCs. To cope with the process, voltage, and temperature (PVT) variations, an adaptive bypassing technique is proposed, tracking and correcting window sizes in the background. Fabricated in a 40-nm CMOS process, the ADC achieves a peak effective number of bits of 9.71 b at 10 MS/s. Walden figure of merit (FoM) of 2.4–6.85 fJ/conv.-step is obtained over a wide range of supply voltages and sampling rates. Measurement has been carried out under typical, fast-fast, and slow-slow process corners and 0 °C–100 °C temperature range, showing that the proposed ADC is robust over PVT variations without any off-chip calibration or tuning.
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
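For orientation, random demodulation instantiates the standard compressive-sensing measurement model: the sub-Nyquist samples are linear measurements of a signal that is sparse in some basis, and recovery is posed as ℓ1 minimization. The symbols below are generic CS notation, assumed rather than quoted from the paper; in the random demodulator, Φ captures the pseudorandom chipping, low-pass filtering, and low-rate sampling.

```latex
y = \Phi x, \qquad x = \Psi \alpha,\ \ \|\alpha\|_0 \le K \ll N,
\qquad
\hat{\alpha} = \arg\min_{\alpha}\ \|\alpha\|_1
\quad \text{s.t.} \quad y = \Phi \Psi \alpha .
```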
Ultra-High Input Impedance, Low Noise Integrated Amplifier for Noncontact Biopotential Sensing Noncontact electrocardiogram/electroencephalogram/electromyogram electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex dc-biasing schemes. We have designed and fabricated a custom integrated noncontact sensor front-end amplifier that fully bootstraps internal and external parasitic impedances. DC stability without the need for external large-valued resistances is ensured by an ac-bootstrapped, low-leakage, on-chip biasing network. The amplifier achieves, without neutralization, an input impedance of 60 fF ∥ 50 TΩ, input-referred noise of 0.05 fA/√Hz and 200 nV/√Hz at 1 Hz, and a current consumption of 1.5 µA per channel at a 3.3 V supply voltage. Stable frequency response is demonstrated below 0.05 Hz with electrode coupling capacitances as low as 0.5 pF.
A high input impedance low-noise instrumentation amplifier with JFET input This paper presents a high input impedance instrumentation amplifier with low-noise, low-power operation. A JFET input pair is employed instead of CMOS to significantly reduce the flicker noise. The amplifier features high input impedance (15.3 GΩ∥1.39 pF) by using the current feedback technique and the JFET input. It has a mid-band gain of 39.9 dB, draws 3.65 μA from a 2.8-V supply, and exhibits an input-referred noise of 3.81 μVrms integrated from 10 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 3.23.
A 12-Bit 20-kS/s 640-nW SAR ADC With a VCDL-Based Open-Loop Time-Domain Comparator This brief presents a 12-bit ultra-low-power asynchronous successive approximation register (SAR) analog-to-digital converter (ADC). A voltage-controlled delay line (VCDL) based open-loop time-domain comparator is proposed and analyzed, achieving low noise and ultra-low power performance. By employing the mixed switching scheme, the segmented capacitive digital-to-analog converter (CDAC) arrays, as well as the synchronous data-weighted averaging (DWA) calibration block, the proposed SAR ADC can operate from 1.8 V down to 0.8 V at 20–200 kS/s. The designed ADC is fabricated in a 0.18-µm CMOS process, and the measurement results show that the proposed SAR ADC achieves an SNDR of 65 dB with a power consumption of 647 nW from a 0.8 V power supply at 20 kS/s.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using a 180-nm CMOS process and occupies a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieves an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-µW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design, and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
Exploiting availability prediction in distributed systems Loosely-coupled distributed systems have significant scale and cost advantages over more traditional architectures, but the availability of the nodes in these systems varies widely. Availability modeling is crucial for predicting per-machine resource burdens and understanding emergent, system-wide phenomena. We present new techniques for predicting availability and test them using traces taken from three distributed systems. We then describe three applications of availability prediction. The first, availability-guided replica placement, reduces object copying in a distributed data store while increasing data availability. The second shows how availability prediction can improve routing in delay-tolerant networks. The third combines availability prediction with virus modeling to improve forecasts of global infection dynamics.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
An efficient low-cost fixed-point digital down converter with modified filter bank In radar systems, as the most important part of an IF radar receiver, the digital down converter (DDC) extracts the needed baseband signal from the modulated IF signal and down-samples it with a decimation factor of 20. This paper proposes an efficient, low-cost structure for the DDC, including an NCO, a mixer, and a modified filter bank. The modified filter bank adopts a high-efficiency structure, comprising a 5-stage CIC filter, a 9-tap CFIR filter, and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. An optimized fixed-point program is then designed in order to implement the DDC on a fixed-point DSP or FPGA. The simulation results show that the proposed DDC meets the expected specification for application in an IF radar receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
Conditional Speculation: An Effective Approach to Safeguard Out-of-Order Execution Against Spectre Attacks Speculative execution side-channel vulnerabilities such as Spectre reveal that conventional architecture designs lack security consideration. This paper proposes a software-transparent defense mechanism, named Conditional Speculation, against Spectre vulnerabilities found on traditional out-of-order microprocessors. It introduces the concept of security dependence to mark speculative memory instructions which could leak information with potential security risk. More specifically, security-dependent instructions are detected and marked with suspect speculation flags in the Issue Queue. All the instructions can be speculatively issued for execution in accordance with the classic out-of-order pipeline. Instructions with suspect speculation flags are considered safe if their speculative execution will not refill new cache lines with unauthorized privileged data; otherwise, they are considered unsafe and thus not allowed to execute speculatively. To reduce the performance impact of not executing unsafe instructions speculatively, we investigate two filtering mechanisms, the Cache-hit based Hazard Filter and the Trusted Page Buffer based Hazard Filter, to filter out false security hazards. Our design philosophy is to speculatively execute safe instructions to maintain the performance benefits of out-of-order execution while blocking the speculative execution of unsafe instructions for security consideration. We evaluate Conditional Speculation in terms of performance, security and area. The experimental results show that the hardware overhead is marginal and the performance overhead is minimal.
The equational theory of pomsets Pomsets have been introduced as a model of concurrency. Since a pomset is a string in which the total order has been relaxed to be a partial order, in this paper we view them as a generalization of strings, and investigate their algebraic properties. In particular, we investigate the axiomatic properties of pomsets, sets of pomsets and ideals of pomsets, under such operations as concatenation, parallel composition, union and their associated closure operations. We find that the equational theory of sets, pomsets under concatenation, parallel composition and union is finitely axiomatizable, whereas the theory of languages under the analogous operations is not. A similar result is obtained for ideals of pomsets, which incorporate the notion of subsumption which is also known as augmentation. Finally, we show that the addition of any closure operation (parallel or serial) leads to nonfinite axiomatizability of the resulting equational theory.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Speculator: a tool to analyze speculative execution attacks and mitigations Speculative execution attacks exploit vulnerabilities at a CPU's microarchitectural level, which, until recently, remained hidden below the instruction set architecture, largely undocumented by CPU vendors. New speculative execution attacks are released on a monthly basis, showing how aspects of the so-far unexplored microarchitectural attack surface can be exploited. In this paper, we introduce Speculator, a new tool to investigate these new microarchitectural attacks and their mitigations, which aims to be the GDB of speculative execution. Using speculative execution markers, a set of instructions that we found to be observable through performance counters during CPU speculation, Speculator can study the microarchitectural behavior of single snippets of code, or more complex attacker and victim scenarios (e.g., Branch Target Injection (BTI) attacks). We also present our findings on multiple CPU platforms, showing the precision and the flexibility offered by Speculator and its templates.
FaCT: a DSL for timing-sensitive computation Real-world cryptographic code is often written in a subset of C intended to execute in constant-time, thereby avoiding timing side channel vulnerabilities. This C subset eschews structured programming as we know it: if-statements, looping constructs, and procedural abstractions can leak timing information when handling sensitive data. The resulting obfuscation has led to subtle bugs, even in widely-used high-profile libraries like OpenSSL. To address the challenge of writing constant-time cryptographic code, we present FaCT, a crypto DSL that provides high-level but safe language constructs. The FaCT compiler uses a secrecy type system to automatically transform potentially timing-sensitive high-level code into low-level, constant-time LLVM bitcode. We develop the language and type system, formalize the constant-time transformation, and present an empirical evaluation that uses FaCT to implement core crypto routines from several open-source projects including OpenSSL, libsodium, and curve25519-donna. Our evaluation shows that FaCT’s design makes it possible to write readable, high-level cryptographic code, with efficient, constant-time behavior.
Binsec/Rel: Efficient Relational Symbolic Execution for Constant-Time at Binary-Level The constant-time programming discipline (CT) is an efficient countermeasure against timing side-channel attacks, requiring the control flow and the memory accesses to be independent from the secrets. Yet, writing CT code is challenging as it demands to reason about pairs of execution traces (2-hypersafety property) and it is generally not preserved by the compiler, requiring binary-level analysis. Unfortunately, current verification tools for CT either reason at higher level (C or LLVM), or sacrifice bug-finding or bounded-verification, or do not scale. We tackle the problem of designing an efficient binary-level verification tool for CT providing both bug-finding and bounded-verification. The technique builds on relational symbolic execution enhanced with new optimizations dedicated to information flow and binary-level analysis, yielding a dramatic improvement over prior work based on symbolic execution. We implement a prototype, BINSEC/REL, and perform extensive experiments on a set of 338 cryptographic implementations, demonstrating the benefits of our approach in both bug-finding and bounded-verification. Using BINSEC/REL, we also automate a previous manual study of CT preservation by compilers. Interestingly, we discovered that gcc -O0 and backend passes of clang introduce violations of CT in implementations that were previously deemed secure by a state-of-the-art CT verification tool operating at LLVM level, showing the importance of reasoning at binary-level.
SafeSpec: Banishing the Spectre of a Meltdown with Leakage-Free Speculation. Speculative attacks, such as Spectre and Meltdown, target speculative execution to access privileged data and leak it through a side-channel. In this paper, we introduce SafeSpec, a new model for supporting speculation in a way that is immune to side-channel leakage, by storing the side effects of speculative instructions in separate structures until they commit. Additionally, we address the possibility of a covert channel from speculative instructions to committed instructions before these instructions are committed. We develop a cycle-accurate model of a modified design of an x86-64 processor and show that the performance impact is negligible.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
Randomized gossip algorithms Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
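A minimal sketch of the pairwise gossip primitive this analysis covers: each step averages the values of a random node and a random neighbor, preserving the sum, so every value converges to the global mean at a rate governed (as noted above) by the second-largest eigenvalue of the associated doubly stochastic matrix. The ring topology, round count, and seed are illustrative assumptions.

```python
import numpy as np

def gossip_average(values, neighbors, rounds=2000, seed=1):
    """Pairwise randomized gossip: repeatedly pick a random node i and a
    random neighbor j, and replace both values with their average."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)
    for _ in range(rounds):
        i = rng.integers(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2      # mass-preserving local average
    return x

# Ring of 8 nodes as a stand-in for a sensor-network topology.
nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(np.round(gossip_average(range(8), nbrs), 3), "true mean:", sum(range(8)) / 8)
```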
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Simulation knowledge extraction and reuse in constrained random processor verification This work proposes a methodology of knowledge extraction from constrained-random simulation data. Feature-based analysis is employed to extract rules describing the unique properties of novel assembly programs hitting special conditions. The knowledge learned can be reused to guide constrained-random test generation towards uncovered corners. The experiments are conducted based on the verification environment of a commercial processor design, in parallel with the on-going verification efforts. The experimental results show that by leveraging the knowledge extracted from constrained-random simulation, we can improve the test templates to activate the assertions that otherwise are difficult to activate by extensive simulation.
Software Defined Integrated RF Frontend Receiver Design.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
1.2
0.2
0.2
0.2
0.2
0.1
0.04
0
0
0
0
0
0
0
3–10 GHz noise-cancelling CMOS LNA using gm-boosting technique. An ultra-wideband (UWB) low-noise amplifier (LNA) using a 0.11 µm CMOS technology is proposed. A common-gate (CG) input stage for wideband input impedance matching and a common-source (CS) stage for noise cancelling are applied. In the proposed LNA, the current of the CG input stage can be significantly reduced by applying the gm-boosting technique using the noise-cancelling CS stage without an additional amplifier, and the noise performance can be improved at the same power consumption. For low-power operation, the LNA consumes 2.9 mW and achieves S11 lower than −12.4 dB, S21 between 16.5 and 17.6 dB, and a noise figure (NF) of 3.6–3.7 dB at frequencies of 3–10 GHz. In low-noise operation, the LNA consumes 8.3 mW, achieving S11 of less than −10.7 dB, S21 of 17.5–18.7 dB, and NF of 2.4–2.9 dB.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
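Because dominance frontiers are the pivotal concept here, a small, deliberately unoptimized sketch computed straight from the definition may help: DF(n) contains every node y such that n dominates a predecessor of y but does not strictly dominate y. The iterative dominator computation and the diamond-shaped CFG are illustrative assumptions; the paper's own algorithms are far more efficient.

```python
def dominators(succ, entry):
    """Iterative dominator sets: dom(n) holds every node (including n)
    that lies on all paths from entry to n."""
    nodes = set(succ)
    pred = {n: [p for p in nodes if n in succ[p]] for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = ({n} | set.intersection(*(dom[p] for p in pred[n]))) if pred[n] else {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, pred

def dominance_frontiers(succ, entry):
    """DF(n) = {y : n dominates a predecessor of y, n does not strictly dominate y}."""
    dom, pred = dominators(succ, entry)
    df = {n: set() for n in succ}
    for y in succ:
        for p in pred[y]:
            for n in dom[p]:
                if n not in dom[y] or n == y:   # n does not strictly dominate y
                    df[n].add(y)
    return df

# Diamond CFG: A branches to B and C, which rejoin at D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dominance_frontiers(cfg, "A"))            # B and C have D in their frontier
```

The join node D appearing in DF(B) and DF(C) is exactly where a phi-function would be placed when converting to SSA form.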
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
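A toy sketch of Chord's key-to-node mapping: keys and nodes hash onto one identifier circle, and a key is owned by its clockwise successor. The hash truncation, ring size, and node names here are assumptions; real Chord resolves the successor in O(log N) hops via finger tables rather than by scanning a global sorted list.

```python
import hashlib

M = 8                                        # identifier bits; ring size 2**M

def chord_id(key: str) -> int:
    """Hash a name onto the identifier circle (SHA-1 truncated to M bits)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, ident):
    """The node responsible for `ident` is the first node at or after it
    when walking clockwise around the ring (wrapping past the top)."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= ident:
            return n
    return ring[0]                           # wrap around past the highest id

nodes = [chord_id(f"node-{i}") for i in range(5)]
for key in ("alpha", "beta", "gamma"):
    k = chord_id(key)
    print(f"key {key!r} -> id {k:3d} -> node {successor(nodes, k)}")
```

Because both keys and nodes use the same hash space, a node join or leave only moves the keys between that node and its immediate neighbors, which is what gives Chord its logarithmic maintenance cost.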
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
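For reference, the scaled-form ADMM iterations at the heart of this review, for minimizing f(x) + g(z) subject to Ax + Bz = c, with u the scaled dual variable and ρ > 0 the penalty parameter:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\Bigl(f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2\Bigr),\\
z^{k+1} &= \arg\min_{z}\Bigl(g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2\Bigr),\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```

Splitting the objective into f and g is what makes the method distributable: in the consensus formulations the review discusses, the x-update decomposes across data partitions while the z-update enforces agreement.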
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
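A structural sketch of the skip-graph idea: each node draws a random membership vector, and the level-i lists link, in key order, exactly the nodes whose vectors share the first i bits, so a search can descend levels as in a skip list. The keys, bit count, and seed below are illustrative assumptions.

```python
import random
from collections import defaultdict

def build_levels(keys, bits=4, seed=7):
    """Each node draws a random membership vector; the level-i list links,
    in key order, exactly the nodes whose vectors share the first i bits."""
    rng = random.Random(seed)
    mv = {k: tuple(rng.randint(0, 1) for _ in range(bits)) for k in keys}
    levels = []
    for i in range(bits + 1):
        groups = defaultdict(list)
        for k in sorted(keys):               # keep each list in key order
            groups[mv[k][:i]].append(k)
        levels.append(list(groups.values()))
    return levels

for i, lists in enumerate(build_levels([3, 9, 14, 21, 27, 33])):
    print(f"level {i}: {lists}")
```

A search starts at the highest level containing the source node and drops down a level whenever following a link would overshoot the target, mirroring skip-list search; because every node belongs to a list at every level, the structure tolerates many node failures without disconnecting.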
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Design of UWB LNA in 45nm CMOS technology: Planar bulk vs. FinFET This paper describes the design of a single-stage differential low noise amplifier (LNA) for ultra-wide-band (UWB) applications, implemented in state-of-the-art planar and FinFET 45 nm CMOS technologies. A gm-boosted topology has been chosen and the LNA has been designed to work over the whole UWB band (3.1 - 10.6 GHz), while driving a capacitive load. The simulations highlight that, at the present stage of the technology development, the planar version of the LNA outperforms the FinFET one thanks to the superior cutoff frequency fT of planar devices in the inversion region, achieving comparable noise figure and voltage gain, but consuming less power.
A Fully Differential Band-Selective Low-Noise Amplifier for MB-OFDM UWB Receivers A band-selective low-noise amplifier (BS-LNA) for multiband orthogonal frequency-division multiplexing ultra-wide-band (UWB) receivers is presented. A switched capacitive network that controls the resonant frequency of the LC load for the band selection is used. It greatly enhances the gain and noise performance of the LNA in each frequency band without increasing power consumption. Moreover, a fu...
Bandwidth Extension Techniques for CMOS Amplifiers Inductive-peaking-based bandwidth extension techniques for CMOS amplifiers in wireless and wireline applications are presented. To overcome the conventional limits on bandwidth extension ratios, these techniques augment inductive peaking using capacitive splitting and magnetic coupling. It is shown that a critical design constraint for optimum bandwidth extension is the ratio of the drain capacita...
A 1.5-V, 1.5-GHz CMOS low noise amplifier A 1.5-GHz low noise amplifier (LNA), intended for use in a global positioning system (GPS) receiver, has been implemented in a standard 0.6-μm CMOS process. The amplifier provides a forward gain (S21) of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. In this paper, we present a detailed analysis of the LNA architecture, including a discussion on the effect...
Desensitized CMOS Low-Noise Amplifiers The minimum attainable noise figure for scaled-CMOS low-noise amplifiers (LNAs) is limited by impedance mismatches such as the well-known noise/power tradeoff. In this paper, we show that a power-constrained optimization of the device noise resistance parameter, Rn, significantly reduces the impact of mismatches and variations and leads to an almost simultaneous noise and power match. This process, called desensitization, makes the design largely immune to measurement and modeling errors and manufacturing variations, and significantly reduces frequency-dependent noise mismatches in wide-band LNAs. Measured data from devices and desensitized LNAs designed on 180-nm and 90-nm CMOS processes shows that: (1) a device size selected for optimum Rn is less sensitive to source impedance mismatches and provides a wide-band noise match; and (2) LNAs approach a simultaneous input and noise match, and exhibit significant improvements (≥2×) in their wide-band noise performance.
A Low-Power, Linearized, Ultra-Wideband LNA Design Technique This work proposes a practical linearization technique for high-frequency wideband applications using an active nonlinear resistor, and analyzes its performance with Volterra series. The linearization technique is applied to an ultra-wideband (UWB) cascode common gate Low Noise Amplifier (CG-LNA), and two additional reference designs are implemented to evaluate the linearization technique - a stan...
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
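A toy sketch of the two flooding phases this heuristic is named for: d synchronous rounds of spreading the largest node id (floodmax) followed by d rounds of spreading the smallest surviving id (floodmin). The final clusterhead-selection line is a simplification of the paper's full rule set, and the graph is an arbitrary example.

    # Simplified Max-Min d-cluster flooding over an undirected graph given
    # as an adjacency dict {node_id: [neighbor ids]}; rounds are synchronous.
    def max_min_d_cluster(adj, d):
        winner = {v: v for v in adj}                 # floodmax phase
        for _ in range(d):
            winner = {v: max([winner[v]] + [winner[u] for u in adj[v]])
                      for v in adj}
        sender = dict(winner)                        # floodmin phase
        for _ in range(d):
            sender = {v: min([sender[v]] + [sender[u] for u in adj[v]])
                      for v in adj}
        # Rule sketch (simplified): a node whose own id survives either
        # phase elects itself clusterhead.
        return {v for v in adj if sender[v] == v or winner[v] == v}

    adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}     # a 4-node path
    print(max_min_d_cluster(adj, d=2))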
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of experimental setups and conditions used, together with the inherent uncertainty caused by the human body, makes the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipment and conditions on the IBC channel. Methods: different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device and effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: the correspondence between simulated and experimental results obtained with the electric phantom has allowed us to discriminate the effects caused by the measurement equipment. Conclusion: this study has helped us obtain useful considerations about optimal setups for galvanic-type IBC as well as to identify some of the main causes of discrepancy in the literature.
On the minimal synchronism needed for distributed consensus Reaching agreement is a primitive of distributed computing. While this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: a system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. Fischer, Lynch and Paterson [FLP] have shown that in a completely asynchronous model, even one failure cannot be tolerated. In this paper we extend their work, identifying several critical system parameters, including various synchronicity conditions, and examine how varying these affects the number of faults which can be tolerated. Our proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others.
Towards a higher-order synchronous data-flow language The paper introduces a higher-order synchronous data-flow language in which communication channels may themselves transport programs. This provides a means to dynamically reconfigure data-flow processes. The language comes as a natural and strict extension of both Lustre and Lucy. This extension is conservative, in the sense that a first-order restriction of the language can receive the same semantics. We illustrate the expressivity of the language with some examples, before giving the formal semantics of the underlying calculus. The language is equipped with a polymorphic type system allowing types to be automatically inferred and a clock calculus rejecting programs for which synchronous execution cannot be statically guaranteed. To our knowledge, this is the first higher-order synchronous data-flow language where stream functions are first-class citizens.
MicroGP—An Evolutionary Assembly Program Generator This paper describes µGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. µGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show µGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
Design of ultra-wide-load, high-efficient DC-DC buck converters The paper presents the design of a current-mode-controlled DC-DC buck converter operating in pulse-width modulation (PWM) mode. The converter achieves over 90% efficiency for load currents ranging from 50 mA to 500 mA, with a maximum power efficiency of 95.6%; the circuit was simulated in the TSMC 0.35 μm CMOS process. In order to achieve high efficiency over an ultra-wide load range, the design uses two PMOS transistors as switches. Results show that the converter achieves above 90% efficiency over the range from 30 mA to 1200 mA with a maximum efficiency of 96.36%, and that, with the additional switch transistor, the load current range is more than doubled. With two PMOS transistors, the proposed converter can also support 3 different load ranges, so it can be programmed for applications operating in any of those three ranges.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0–score_13: 1.222, 0.111, 0.016062, 0.011698, 0.002344, 0.000085, 0, 0, 0, 0, 0, 0, 0, 0
A CMOS Current-Mode Magnetic Hall Sensor With Integrated Front-End A Hall magnetic sensor working in the current domain and its electronic interface are presented. The paper describes the physical sensor design and implementation in a standard CMOS technology, and the transistor-level design of its highly sensitive front-end, together with the sensor's experimental characterization. The current-mode Hall sensor and the analog readout circuit have been fabricated using a 0.18-μm CMOS technology. The sensor uses the current spinning technique to compensate for the offset and provides a differential current as an output signal. The measured sensor power consumption and residual offset are 120 μW and 50 μT, respectively.
High resolution, low offset Vertical Hall device in low-voltage CMOS technology Vertical Hall-effect devices (VHDs) are CMOS-integrated sensors dedicated to the measurement of the magnetic field in the plane of the chip. At low frequency, performance is severely reduced by 1/f noise. We recently proposed a theoretical study which confirms the capability of the spinning-current technique to lower the 1/f noise in low-voltage VHDs. In this paper, we propose a practical way to implement this technique. Experimental results bring out significant improvements. An offset of 0.1 mT and a resolution of 37 μT have been measured over a 1.6 kHz bandwidth.
A highly sensitive CMOS digital Hall sensor for low magnetic field applications. Integrated CMOS Hall sensors have been widely used to measure magnetic fields. However, they are difficult to work with in a low magnetic field environment due to their low sensitivity and large offset. This paper describes a highly sensitive digital Hall sensor fabricated in 0.18 μm high-voltage CMOS technology for low-field applications. The sensor consists of a switched cross-shaped Hall plate and a novel signal conditioner. It effectively eliminates offset and low-frequency 1/f noise by applying a dynamic quadrature offset cancellation technique. The measured results show the optimal Hall plate achieves a high current-related sensitivity of about 310 V/AT. The whole sensor has a remarkable ability to measure a minimum ±2 mT magnetic field and output a digital Hall signal in a wide temperature range from −40 °C to 120 °C.
Low Power CMOS-Based Hall Sensor with Simple Structure Using Double-Sampling Delta-Sigma ADC. A CMOS (complementary metal-oxide-semiconductor) Hall sensor with low power consumption and a simple structure is introduced. The tiny magnetic signal from the Hall device can be detected by a high-resolution delta-sigma ADC in the presence of offset and flicker noise. Also, the offset as well as the flicker noise are effectively suppressed by the current spinning technique combined with the double-sampling switches of the ADC. The double-sampling scheme of the ADC reduces the operating frequency and helps to reduce the power consumption. The prototype Hall sensor is fabricated in a 0.18-μm CMOS process, and the measurement shows a detection range of ±150 mT and a sensitivity of 110 μV/mT. The size of the active area is 0.7 mm², and the total power consumption is 4.9 mW. The proposed system is advantageous not only for low power consumption, but also for small sensor size due to its simplicity.
A continuous-time ripple reduction technique for spinning-current Hall sensors The intrinsic offset of Hall sensors can be reduced with the help of the spinning-current technique, which modulates this offset away from the signal band. The resulting offset ripple can then be removed by a low-pass filter, which, however, limits the sensor's bandwidth. This paper presents a ripple-reduction technique that does not require a low-pass filter. Measurements on a Hall sensor system implemented in a 0.18-μm CMOS process show that the technique can reduce the residual ripple by at least 40 dB, to the same level as the sensor's noise.
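The spinning-current technique that this line of Hall-sensor work builds on can be illustrated with two-phase arithmetic: rotating the bias current between contact pairs flips the sign of the misalignment offset but not of the Hall signal, so averaging the phases cancels the offset. The voltage values below are invented purely for the demonstration.

    # Toy numeric illustration of spinning-current offset cancellation.
    v_hall = 1.0e-3      # true Hall signal, volts (made-up value)
    v_offset = 5.0e-3    # bridge misalignment offset, volts (made-up value)

    phase_0 = v_hall + v_offset     # bias along one contact pair
    phase_90 = v_hall - v_offset    # bias rotated 90 degrees: offset flips sign
    print((phase_0 + phase_90) / 2)  # -> 0.001: offset removed, signal kept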
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
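A minimal sketch of Chord's key-to-node mapping: a key is assigned to its successor, the first node identifier at or after the key when walking clockwise around the 2^m identifier ring. The node ids and ring size below are arbitrary example values, and real Chord resolves this mapping in O(log n) hops via per-node finger tables rather than a global sorted array.

    import bisect

    M = 6                            # identifier bits, ring size 2^M (example)
    nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

    def successor(key):
        # first node id >= key, wrapping around the ring
        key %= 2 ** M
        i = bisect.bisect_left(nodes, key)
        return nodes[i % len(nodes)]

    print(successor(10))   # -> 14
    print(successor(60))   # -> 1 (wraps past the largest id)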
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
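A toy sketch of the randomized-placement idea described above: with a heap at least twice as large as the live data and slots chosen uniformly at random, a buffer overrun clobbers any particular neighbor only with bounded probability. The slot count and interface are invented for illustration; the real allocator works on size-segregated regions of raw memory, not Python objects.

    import random

    class RandomizedHeap:
        # Heap with >= 2x the slots needed, DieHard-style (sizes illustrative).
        def __init__(self, slots=64):
            self.free = list(range(slots))
            self.owner = {}
        def alloc(self, obj_id):
            # place the object at a uniformly random free slot
            i = self.free.pop(random.randrange(len(self.free)))
            self.owner[i] = obj_id
            return i
        def free_slot(self, i):
            del self.owner[i]
            self.free.append(i)

    h = RandomizedHeap()
    print([h.alloc(k) for k in range(4)])   # four objects at random slots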
Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns This article describes three powerful general-purpose families of exploits for buffer overruns: arc injection, pointer subterfuge, and heap smashing. These new techniques go beyond the traditional "stack smashing" attack and invalidate traditional assumptions about buffer overruns.
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI (linear matrix inequality)-based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers. LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%–8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple-set insertion/deletion technique is efficient for reducing the distortion caused by the insertion/deletion step. When the conversion factor is (N ± δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
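A sketch of the direct-insertion side of this technique for a conversion factor of (N + δ)/N: each block of N input samples gets δ samples repeated at chosen insertion points. Evenly spaced points are used below for simplicity; the paper's contribution is precisely the optimum selection of these points (and of the multiple-set combinations) to minimize distortion.

    def insert_upsample(x, N, d):
        # direct insertion: repeat d samples per block of N, giving (N+d)/N
        out = []
        for blk in range(0, len(x), N):
            block = x[blk:blk + N]
            # evenly spaced insertion points (illustrative, not optimal)
            points = [round((k + 1) * len(block) / (d + 1)) - 1
                      for k in range(d)]
            for i, s in enumerate(block):
                out.append(s)
                if i in points:
                    out.append(s)    # the inserted (repeated) sample
        return out

    print(insert_upsample(list(range(8)), N=4, d=1))  # 8 samples in, 10 out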
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as part of a BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate the main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog-to-digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters; then, in most cases, data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of the main data compression methods.
score_0–score_13: 1.053764, 0.053389, 0.051694, 0.05, 0.017949, 0, 0, 0, 0, 0, 0, 0, 0, 0
Domain-specific hardware accelerators DSAs gain efficiency from specialization and performance from parallelism.
Approximate Computing: A Survey. As one of the most promising energy-efficient computing paradigms, approximate computing has gained a lot of research attention in the past few years. This paper presents a survey of state-of-the-art work in all aspects of approximate computing and highlights future research challenges in this field.
Nagini: A Static Verifier For Python We present Nagini, an automated, modular verifier for statically-typed, concurrent Python 3 programs, built on the Viper verification infrastructure. Combining established concepts with new ideas, Nagini can verify memory safety, functional properties, termination, deadlock freedom, and input/output behavior. Our experiments show that Nagini is able to verify non-trivial properties of real-world Python code.
Layerwise Buffer Voltage Scaling for Energy-Efficient Convolutional Neural Network In order to effectively reduce buffer energy consumption, which constitutes a significant part of the total energy consumption in a convolutional neural network (CNN), it is useful to apply different amounts of energy conservation effort to the different levels of a CNN as the buffer energy to total energy usage ratios can differ quite substantially across the layers of a CNN. This article proposes layerwise buffer voltage scaling as an effective technique for reducing buffer access energy. Error-resilience analysis, including interlayer effects, conducted during design-time is used to determine the specific buffer supply voltage to be used for each layer of a CNN. Then these layer-specific buffer supply voltages are used in the CNN for image classification inference. Error injection experiments with three different types of CNN architectures show that, with this technique, the buffer access energy and overall system energy can be reduced by up to 68.41% and 33.68%, respectively, without sacrificing image classification accuracy.
Hierarchical Approximate Memory for Deep Neural Network Applications Power consumed by a computer memory system can be significantly reduced if a certain level of error is permitted in the data stored in memory. Such an approximate memory approach is viable for use in applications developed using deep neural networks (DNNs) because such applications are typically error-resilient. In this paper, the use of hierarchical approximate memory for DNNs is studied and mode...
Gamma: leveraging Gustavson’s algorithm to accelerate sparse matrix multiplication Sparse matrix-sparse matrix multiplication (spMspM) is at the heart of a wide range of scientific and machine learning applications. spMspM is inefficient on general-purpose architectures, making accelerators attractive. However, prior spMspM accelerators use inner- or outer-product dataflows that suffer poor input or output reuse, leading to high traffic and poor performance. These prior accelerators have not explored Gustavson's algorithm, an alternative spMspM dataflow that does not suffer from these problems but features irregular memory access patterns that prior accelerators do not support. We present GAMMA, an spMspM accelerator that uses Gustavson's algorithm to address the challenges of prior work. GAMMA performs spMspM's computation using specialized processing elements with simple high-radix mergers, and performs many merges in parallel to achieve high throughput. GAMMA uses a novel on-chip storage structure that combines features of both caches and explicitly managed buffers. This structure captures Gustavson's irregular reuse patterns and streams thousands of concurrent sparse fibers (i.e., lists of coordinates and values for rows or columns) with explicitly decoupled data movement. GAMMA features a new dynamic scheduling algorithm to achieve high utilization despite irregularity. We also present new preprocessing algorithms that boost GAMMA's efficiency and versatility. As a result, GAMMA outperforms prior accelerators by gmean 2.1x, and reduces memory traffic by gmean 2.2x and by up to 13x.
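For reference, Gustavson's dataflow that Gamma accelerates computes C row by row, merging scaled rows of B selected by the nonzeros of the corresponding row of A. The dict-of-dicts sketch below is illustrative only and says nothing about the accelerator's actual data structures.

    # Row-wise Gustavson sparse-matrix x sparse-matrix multiply.
    # Matrices are dicts of dicts: A[i][k] = value of nonzero (i, k).
    def gustavson_spmspm(A, B):
        C = {}
        for i, a_row in A.items():
            acc = {}                        # sparse accumulator for row i of C
            for k, a_ik in a_row.items():   # each nonzero of row i of A...
                for j, b_kj in B.get(k, {}).items():
                    acc[j] = acc.get(j, 0) + a_ik * b_kj  # ...merges row k of B
            if acc:
                C[i] = acc
        return C

    A = {0: {1: 2.0}, 1: {0: 3.0, 1: 1.0}}
    B = {0: {0: 1.0}, 1: {0: 4.0, 1: 5.0}}
    print(gustavson_spmspm(A, B))  # {0: {0: 8.0, 1: 10.0}, 1: {0: 7.0, 1: 5.0}}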
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems The OpenCL standard offers a common API for program execution on systems composed of different types of computational devices such as multicore CPUs, GPUs, or other accelerators.
TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory. The high accuracy of deep neural networks (NNs) has led to the development of NN accelerators that improve performance by two orders of magnitude. However, scaling these accelerators for higher performance with increasingly larger NNs exacerbates the cost and energy overheads of their memory systems, including the on-chip SRAM buffers and the off-chip DRAM channels. This paper presents the hardware architecture and software scheduling and partitioning techniques for TETRIS, a scalable NN accelerator using 3D memory. First, we show that the high throughput and low energy characteristics of 3D memory allow us to rebalance the NN accelerator design, using more area for processing elements and less area for SRAM buffers. Second, we move portions of the NN computations close to the DRAM banks to decrease bandwidth pressure and increase performance and energy efficiency. Third, we show that despite the use of small SRAM buffers, the presence of 3D memory simplifies dataflow scheduling for NN computations. We present an analytical scheduling scheme that matches the efficiency of schedules derived through exhaustive search. Finally, we develop a hybrid partitioning scheme that parallelizes the NN computations over multiple accelerators. Overall, we show that TETRIS improves the performance by 4.1x and reduces the energy by 1.5x over NN accelerators with conventional, low-power DRAM memory systems.
A 12 bit 2.9 GS/s DAC With IM3 ≪ −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high frequencies, co...
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
A dynamic analysis of the Dickson charge pump circuit Dynamics of the Dickson charge pump circuit are analyzed. The analytical results enable the estimation of the rise time of the output voltage and that of the power consumption during boosting. By using this analysis, the optimum number of stages to minimize the rise time has been estimated as 1.4 N_min, where N_min is the minimum value of the number of stages necessary for a given parame...
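A worked numeric example of the stage-count rule quoted above. N_min is computed here from one common ideal approximation of the Dickson steady-state output, V_out ≈ V_dd + N(V_dd − V_t), with losses ignored; the supply, threshold, and target values are illustrative, not the paper's.

    import math

    # Rise time is minimized near N = 1.4 * N_min (the paper's estimate).
    V_dd, V_t, V_target = 1.8, 0.4, 9.0   # volts, made-up example values
    N_min = math.ceil((V_target - V_dd) / (V_dd - V_t))
    print(N_min, round(1.4 * N_min))      # -> 6 stages minimum, ~8 for fastest rise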
Prediction of the Spectrum of a Digital Delta–Sigma Modulator Followed by a Polynomial Nonlinearity This paper presents a mathematical analysis of the power spectral density of the output of a nonlinear block driven by a digital delta-sigma modulator. The nonlinearity is a memoryless third-order polynomial with real coefficients. The analysis yields expressions that predict the noise floor caused by the nonlinearity when the input is constant.
A 1.95 GHz Fully Integrated Envelope Elimination and Restoration CMOS Power Amplifier Using Timing Alignment Technique for WCDMA and LTE A fully integrated envelope elimination and restoration (EER) CMOS power amplifier (PA) has been developed for WCDMA and LTE handsets. EER is a supply modulation technique that first divides modulated RF signal into envelope and phase signals and then restores it at a switching PA output. Supply voltage of the switching PA is modulated by the envelope signal through a high-speed supply modulator. EER PA is highly efficient due to the switching PA and the supply modulation. However, it generally has difficulty, especially for a wide bandwidth baseband application like LTE, achieving a wide bandwidth for phase signal path and highly accurate timing between envelope and phase signals. To overcome these challenges, an envelope/phase generator based on a mixer and a limiter was proposed to generate the wide bandwidth phase signal, and a timing aligner based on a delay locked loop with a variable high-pass filter (HPF) was proposed to compensate for the timing mismatch. The chip was implemented in 90 nm CMOS technology. Measured power-added efficiency (PAE) and adjacent channel leakage ratio (ACLR) were 39% and -41 dBc for WCDMA, and measured PAE and ACLR E-UTRA1 were 32% and -33 dBc for 20 MHz-BW LTE.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
score_0–score_13: 1.014286, 0.014286, 0.014286, 0.014286, 0.014286, 0.007143, 0.003571, 0.000649, 0, 0, 0, 0, 0, 0
GraphIt: a high-performance graph DSL The performance bottlenecks of graph applications depend not only on the algorithm and the underlying hardware, but also on the size and structure of the input graph. As a result, programmers must try different combinations of a large set of techniques, which make tradeoffs among locality, work-efficiency, and parallelism, to develop the best implementation for a specific algorithm and type of graph. Existing graph frameworks and domain specific languages (DSLs) lack flexibility, supporting only a limited set of optimizations. This paper introduces GraphIt, a new DSL for graph computations that generates fast implementations for algorithms with different performance characteristics running on graphs with different sizes and structures. GraphIt separates what is computed (algorithm) from how it is computed (schedule). Programmers specify the algorithm using an algorithm language, and performance optimizations are specified using a separate scheduling language. The algorithm language simplifies expressing the algorithms, while exposing opportunities for optimizations. We formulate graph optimizations, including edge traversal direction, data layout, parallelization, cache, NUMA, and kernel fusion optimizations, as tradeoffs among locality, parallelism, and work-efficiency. The scheduling language enables programmers to easily search through this complicated tradeoff space by composing together a large set of edge traversal, vertex data layout, and program structure optimizations. The separation of algorithm and schedule also enables us to build an autotuner on top of GraphIt to automatically find high-performance schedules. The compiler uses a new scheduling representation, the graph iteration space, to model, compose, and ensure the validity of the large number of optimizations. We evaluate GraphIt’s performance with seven algorithms on graphs with different structures and sizes. GraphIt outperforms the next fastest of six state-of-the-art shared-memory frameworks (Ligra, Green-Marl, GraphMat, Galois, Gemini, and Grazelle) on 24 out of 32 experiments by up to 4.8×, and is never more than 43% slower than the fastest framework on the other experiments. GraphIt also reduces the lines of code by up to an order of magnitude compared to the next fastest framework.
k²-Trees for Compact Web Graph Representation This paper presents a Web graph representation based on a compact tree structure that takes advantage of large empty areas of the adjacency matrix of the graph. Our results show that our method is competitive with the best alternatives in the literature, offering a very good compression ratio (3.3–5.3 bits per link) while permitting fast navigation on the graph to obtain direct as well as reverse neighbors (2–15 microseconds per neighbor delivered). Moreover, it allows for extended functionality not usually considered in compressed graph representations.
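A minimal sketch of the k²-tree idea for k = 2: the adjacency matrix is split into k² submatrices, a 0 bit prunes an all-empty area in one bit, and a 1 bit is expanded recursively. This sketch emits bits in DFS order for brevity, whereas the actual structure stores the tree level by level in separate bitmaps; the example matrix is arbitrary.

    # Assumes the matrix side is a power of k.
    def k2_bits(m, bits, k=2):
        n = len(m)
        if n == k:                      # leaf level: emit the cells themselves
            bits.extend(m[r][c] for r in range(k) for c in range(k))
            return
        step = n // k
        for r in range(0, n, step):
            for c in range(0, n, step):
                sub = [row[c:c + step] for row in m[r:r + step]]
                if any(any(row) for row in sub):
                    bits.append(1)      # nonempty submatrix: expand it
                    k2_bits(sub, bits, k)
                else:
                    bits.append(0)      # large empty area stored as one bit

    m = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    bits = []
    k2_bits(m, bits)
    print(bits)   # [0, 1, 0, 0, 1, 0, 0, 0]: one link, mostly pruned zeros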
Accelerating sparse matrix-vector multiplication on GPUs using bit-representation-optimized schemes The sparse matrix-vector (SpMV) multiplication routine is an important building block used in many iterative algorithms for solving scientific and engineering problems. One of the main challenges of SpMV is its memory-boundedness. Although compression has been proposed previously to improve SpMV performance on CPUs, its use has not been demonstrated on the GPU because of the serial nature of many compression and decompression schemes. In this paper, we introduce a family of bit-representation-optimized (BRO) compression schemes for representing sparse matrices on GPUs. The proposed schemes, BRO-ELL, BRO-COO, and BRO-HYB, perform compression on index data and help to speed up SpMV on GPUs through reduction of memory traffic. Furthermore, we formulate a BRO-aware matrix reordering scheme as a data clustering problem and use it to increase compression ratios. With the proposed schemes, experiments show that average speedups of 1.5× compared to ELLPACK and HYB can be achieved for SpMV on GPUs.
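The index-compression flavor of the BRO schemes can be sketched as delta-encoding a row's column indices and packing the deltas with just enough bits for the largest one, which is what cuts the memory traffic SpMV pays. The packing format below is invented for illustration and ignores the paper's actual GPU-side layouts and decompression kernels.

    # Assumes cols is a strictly increasing list of column indices.
    def pack_indices(cols):
        deltas = [cols[0]] + [b - a for a, b in zip(cols, cols[1:])]
        width = max(d.bit_length() for d in deltas)   # bits per delta
        word = 0
        for d in reversed(deltas):                    # pack little-end first
            word = (word << width) | d
        return word, width, len(deltas)

    def unpack_indices(word, width, n):
        cols, mask, acc = [], (1 << width) - 1, 0
        for _ in range(n):
            acc += word & mask                        # undo the delta coding
            cols.append(acc)
            word >>= width
        return cols

    w, width, n = pack_indices([3, 7, 8, 20])
    print(width, unpack_indices(w, width, n))   # 4 bits/index, [3, 7, 8, 20]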
Compression-aware graph computation. Much recent work has focused on parallel graph processing, including PowerGraph [9] and Ligra [14]. These frameworks process large graphs in shared memory, requiring a terabyte of memory and incurring expensive maintenance costs. Reducing graph size to fit in memory is thus crucial in cutting the cost of large-scale graph computation. Compression has been widely used to reduce graph size. However, it can meanwhile compromise graph computation efficiency because of the nontrivial decompression overhead before graph computation. In this paper, we propose a simple and yet efficient coding scheme. It not only leads to smaller compressed graphs; we can meanwhile perform graph computation directly on the compressed graphs with no or only partial decompression, namely compression-aware computation, leading to faster running time. Our experiments validate that the coding scheme achieves a 2.99X compression ratio, and three compression-aware graph algorithms achieve 7.02X, 2.88X, and 2.34X faster running time than the same graph algorithms on the graphs without compression.
Implementing Push-Pull Efficiently in GraphBLAS We factor Beamer's push-pull, also known as direction-optimized breadth-first search (DOBFS), into 3 separable optimizations, and analyze them for generalizability, asymptotic speedup, and contribution to overall speedup. We demonstrate that masking is critical for high performance and can be generalized to all graph algorithms where the sparsity pattern of the output is known a priori. We show that these graph algorithm optimizations, which together constitute DOBFS, can be neatly and separably described using linear algebra and can be expressed in the GraphBLAS linear-algebra-based framework. We provide experimental evidence that with these optimizations, a DOBFS expressed in a linear-algebra-based graph framework attains competitive performance with state-of-the-art graph frameworks on the GPU and on a multi-threaded CPU, achieving 101 GTEPS on a Scale 22 RMAT graph.
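A compact sketch of the push-pull (direction-optimized) BFS being factored here, on an undirected adjacency dict: push scans edges out of a small frontier, while pull lets unvisited vertices scan backward for a frontier parent once the frontier is large. The switch threshold of |V|/20 is an arbitrary stand-in for the tuned heuristics in the literature, and this plain-Python form ignores the masking and linear-algebra formulation the paper is about.

    def dobfs(adj, src):
        # adj: undirected graph, vertex -> list of neighbors
        parent = {src: src}
        frontier = {src}
        while frontier:
            nxt = set()
            if len(frontier) < max(1, len(adj) // 20):   # push phase
                for u in frontier:
                    for v in adj[u]:
                        if v not in parent:
                            parent[v] = u
                            nxt.add(v)
            else:                                        # pull phase
                for v in adj:
                    if v not in parent:
                        for u in adj[v]:
                            if u in frontier:
                                parent[v] = u
                                nxt.add(v)
                                break
            frontier = nxt
        return parent

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(dobfs(adj, 0))   # parent pointers of the BFS tree rooted at 0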
Chipyard: Integrated Design, Simulation, and Implementation Framework for Custom SoCs Continued improvement in computing efficiency requires functional specialization of hardware designs. Agile hardware design methodologies have been proposed to alleviate the increased design costs of custom silicon architectures, but their practice thus far has been accompanied with challenges in integration and validation of complex systems-on-a-chip (SoCs). We present the Chipyard framework, an integrated SoC design, simulation, and implementation environment for specialized compute systems. Chipyard includes configurable, composable, open-source, generator-based IP blocks that can be used across multiple stages of the hardware development flow while maintaining design intent and integration consistency. Through cloud-hosted FPGA accelerated simulation and rapid ASIC implementation, Chipyard enables continuous validation of physically realizable customized systems.
MapGraph: A High Level API for Fast Development of High Performance Graph Analytics on GPUs High performance graph analytics are critical for a long list of application domains. In recent years, the rapid advancement of many-core processors, in particular graphical processing units (GPUs), has sparked a broad interest in developing high performance parallel graph programs on these architectures. However, the SIMT architecture used in GPUs places particular constraints on both the design and implementation of the algorithms and data structures, making the development of such programs difficult and time-consuming. We present MapGraph, a high performance parallel graph programming framework that delivers up to 3 billion Traversed Edges Per Second (TEPS) on a GPU. MapGraph provides a high-level abstraction that makes it easy to write graph programs and obtain good parallel speedups on GPUs. To deliver high performance, MapGraph dynamically chooses among different scheduling strategies depending on the size of the frontier and the size of the adjacency lists for the vertices in the frontier. In addition, a Structure Of Arrays (SOA) pattern is used to ensure coalesced memory access. Our experiments show that, for many graph analytics algorithms, an implementation, with our abstraction, is up to two orders of magnitude faster than a parallel CPU implementation and is comparable to state-of-the-art, manually optimized GPU implementations. In addition, with our abstraction, new graph analytics can be developed with relatively little effort.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced "cache" memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called "sectors." Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
The price of validity in dynamic networks Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) [6, 13, 21, 34, 35, 37] on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, single-site validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
Design Considerations for a Direct RF Sampling Mixer This brief presents a detailed time-domain and frequency-domain analysis of a direct RF sampling mixer. Design considerations such as incomplete charge sharing and large signal nonlinearity are addressed. An accurate frequency-domain transfer function is derived. Estimation of noise figure is given. The analysis applies to the design of sub-sampling mixers that have become important for software-d...
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in the literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A Fully Autonomous Integrated Interface Circuit for Piezoelectric Harvesters This paper presents a fully autonomous, adaptive pulsed synchronous charge extractor (PSCE) circuit optimized for piezoelectric harvesters (PEHs) which have a wide output voltage range 1.3-20 V. The PSCE chip fabricated in a 0.35 μm CMOS process is supplied exclusively by the buffer capacitor where the harvested energy is stored in. Due to the low power consumption, the chip can handle a minimum PEH output power of 5.7 μW. The system performs a startup from an uncharged buffer capacitor and operates in the adaptive mode at storage buffer voltages from 1.4 V to 5 V. By reducing the series resistance losses, the implementation of an improved switching technique increases the extracted power by up to 20% compared to the formerly presented Synchronous Electric Charge Extraction (SECE) technique and enables the chip efficiency to reach values of up to 85%. Compared to a low-voltage-drop passive full-wave rectifier, the PSCE chip increases the extracted power to 123% when the PEH is driven at resonance and to 206% at off-resonance.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals can be continuously monitored.
1.084444
0.066667
0.066667
0.066667
0.066667
0.046667
0.016667
0.001778
0
0
0
0
0
0
A Dynamic Event-Triggered Approach to State Estimation for Switched Memristive Neural Networks With Nonhomogeneous Sojourn Probabilities This paper investigates the state estimation for switched memristive neural networks with nonhomogeneous sojourn probabilities. Essentially different from most current literature, a novel switching law is developed to depict the dynamic behavior of switched memristive neural networks, in which the sojourn probabilities of each subsystem are assumed to be nonhomogeneous, and a higher-level determin...
Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems. In this paper, finite-time stabilization is considered for a class of nonlinear systems dominated by a lower-triangular model with a time-varying gain. Based on the finite-time Lyapunov stability theorem and dynamic gain control design approach, state feedback finite-time stabilization controllers are proposed with gains being tuned online by two dynamic equations. Different from many existing finite-time control designs for lower-triangular nonlinear systems, the celebrated backstepping method is not utilized here. It is observed that our design procedure is much simpler, and the resulting control gains are in general not as high as those provided by the backstepping method. A simulation example is given to demonstrate the effectiveness of the proposed design procedure.
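For reference, the finite-time Lyapunov stability theorem this design rests on, in its standard form (a well-known result stated here for context, not taken from the paper): if a positive-definite function V satisfies, along system trajectories,

\dot{V}(x) \le -c\, V(x)^{\alpha}, \quad c > 0,\ 0 < \alpha < 1,

then the origin is finite-time stable, with settling time bounded by

T(x_0) \le \frac{V(x_0)^{1-\alpha}}{c\,(1-\alpha)}.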
On end-to-end analysis of packet loss End-to-end analysis of packet loss is presented, addressing both overall packet loss rate and loss correlations. Voice over IP is used as an application for which packet loss performance is analyzed. In addition to independent packet losses, concepts of temporal and spatial- or across-the-domain-correlations are analyzed and their applicability to end-to-end QoS analysis is discussed.
Robust stability of Hopfield delayed neural networks via an augmented L-K functional. This paper focuses on the issue of robust stability of artificial delayed neural networks. A free-matrix-based inequality strategy is developed by introducing a set of slack variables, which can be optimized by means of existing convex optimization algorithms. To reflect a large portion of the dynamical behaviors of the system, uncertain parameters are considered. By constructing an augmented Lyapunov functional, sufficient conditions are derived to guarantee that the considered neural systems are completely stable. The conditions are presented in the form of linear matrix inequalities (LMIs). Finally, numerical cases are given to show the suitability of the results presented.
Passivity Analysis for Quaternion-Valued Memristor-Based Neural Networks With Time-Varying Delay. This paper is concerned with the problem of global exponential passivity for quaternion-valued memristor-based neural networks (QVMNNs) with time-varying delay. The QVMNNs can be seen as a switched system because the memristor parameters switch according to the states of the network. This is the first time that the global exponential passivity of QVMNNs with time-varying delay is investigate...
Passivity and Dissipativity of Fractional-Order Quaternion-Valued Fuzzy Memristive Neural Networks: Nonlinear Scalarization Approach In this article, the problem of passivity and dissipativity analysis is investigated for a class of fractional-order quaternion-valued fuzzy memristive neural networks. Based on the well-known nonlinear scalarizing function, a nonlinear scalarization method is developed, which can be employed to compare the "size" of two different quaternions. In this way, the convex closure formed by the quaternion-valued connection weights is meaningful. By constructing proper Lyapunov functionals, several improved passivity criteria and dissipativity conclusions are established, which can be checked efficiently by standard mathematical calculations. Finally, the obtained results are validated by simulation examples.
Finite-Time Synchronization for Fuzzy Inertial Neural Networks by Maximum Value Approach In this article, the finite-time synchronization of drive-response fuzzy inertial neural networks with delays is considered. Without applying finite-time stability theorems or the integral inequality approach, by using the maximum-value approach and designing two different kinds of controllers in the time variable t, two criteria ensuring the finite-time synchronization for the dr...
Finite-time cluster synchronization of T-S fuzzy complex networks with discontinuous subsystems and random coupling delays This paper is concerned with the cluster synchronization in finite time for a class of complex networks with nonlinear coupling strengths and probabilistic coupling delays. The complex networks consist of several clusters of nonidentical discontinuous systems subject to uncertain bounded external disturbances. Based on the T-S fuzzy interpolation approach, we first obtain a set of T-S fuzzy complex networks with constant coupling strengths. By developing some novel Lyapunov functionals and using the concept of Filippov solution, some new analytical techniques are established to derive sufficient conditions ensuring the cluster synchronization within a settling time. In particular, this paper extends the pinning control strategies for networks with continuous-time dynamics to discontinuous networks. Numerical simulations demonstrate that the theoretical results are effective and the T-S fuzzy approach is important for relaxed results.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
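The residual reformulation is small enough to show directly. A minimal numpy sketch (illustrative only: a two-layer residual function F with square weight matrices, not the paper's convolutional architecture):

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = F(x) + x: the block learns the residual F rather than an
    unreferenced mapping; the identity shortcut adds no parameters."""
    f = relu(x @ W1) @ W2      # two-layer residual function F(x)
    return relu(f + x)         # identity shortcut, then nonlinearity

# toy usage: stacking blocks leaves a direct path for signal and gradients
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
W1 = rng.standard_normal((64, 64)) * 0.1
W2 = rng.standard_normal((64, 64)) * 0.1
y = residual_block(x, W1, W2)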
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
MDVM System Concept, Paging Latency and Round-2 Randomized Leader Election Algorithm in SG The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication network. However, the mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology and buffer space limitations, and is based on dynamically elected coordinators eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round and the possibility of second-round execution decreases significantly with the increase in the size of the SG (|N_a|). The overall message complexity of the algorithm is O(|N_a|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
Wireless sensing and vibration control with increased redundancy and robustness design. Control systems with long distance sensor and actuator wiring have the problem of high system cost and increased sensor noise. Wireless sensor network (WSN)-based control systems are an alternative solution involving lower setup and maintenance costs and reduced sensor noise. However, WSN-based control systems also encounter problems such as possible data loss, irregular sampling periods (due to the uncertainty of the wireless channel), and the possibility of sensor breakdown (due to the increased complexity of the overall control system). In this paper, a wireless microcontroller-based control system is designed and implemented to wirelessly perform vibration control. The wireless microcontroller-based system is quite different from regular control systems due to its limited speed and computational power. Hardware, software, and control algorithm design are described in detail to demonstrate this prototype. Model and system state compensation is used in the wireless control system to solve the problems of data loss and sensor breakdown. A positive position feedback controller is used as the control law for the task of active vibration suppression. Both wired and wireless controllers are implemented. The results show that the WSN-based control system can be successfully used to suppress the vibration and produces resilient results in the presence of sensor failure.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm, together with a real-time QRS-detection algorithm, is proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates the synchronous time-sequential ADC with a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and a 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a FoM of 48 fJ/conversion-step in the best case. The prototype is also tested with an ECG signal input and successfully extracts the heartbeat signal.
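A behavioral model of the tracking idea (a sketch under assumed behavior: the window size, miss handling, and normalized inputs are illustrative choices, not the chip's registers). A hit inside the tracking window shrinks the binary search from 12 cycles to roughly log2 of the window width:

def tracking_sar_adc(samples, bits=12, window=64):
    """Model of a dynamic-tracking SAR ADC; samples are normalized to [0, 1).
    If a sample falls inside a window around the previous code, only that
    code subrange is searched; otherwise a full-range conversion runs."""
    full = 1 << bits
    pred = full // 2
    codes = []
    for v in samples:
        lo, hi = max(pred - window, 0), min(pred + window, full)
        if not (lo / full <= v < hi / full):
            lo, hi = 0, full              # window miss: full binary search
        while hi - lo > 1:                # SAR loop over the code interval
            mid = (lo + hi) // 2
            if v >= mid / full:
                lo = mid
            else:
                hi = mid
        pred = lo                         # update the prediction code
        codes.append(lo)
    return codes

print(tracking_sar_adc([0.50, 0.51, 0.52, 0.10]))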
1.1
0.1
0.1
0.1
0.1
0.1
0.1
0.016667
0
0
0
0
0
0
Performance Evaluation of an EDA-Based Large-Scale Plug-In Hybrid Electric Vehicle Charging Algorithm The anticipation of a large penetration of plug-in hybrid electric vehicles (PHEVs) into the market brings up many technical problems that need to be addressed. In the near future, a large number of PHEVs in our society will add a large-scale energy load to our power grids, as well as add substantial energy resources that can be utilized. An emerging issue is that a large number of PHEVs simultaneously connected to the grid may pose a huge threat to the overall power system quality and stability. In this paper, the authors propose an algorithm for optimally managing a large number of PHEVs (e.g., 3000) charging at a municipal parking station. The authors used the estimation of distribution algorithm (EDA) to intelligently allocate electrical energy to the PHEVs connected to the grid. A mathematical framework for the objective function (i.e., maximizing the average state-of-charge at the next time step) is also given. The authors considered real-world constraints such as energy price, remaining battery capacity, and remaining charging time. The authors also simulated the real-world parking deck scenarios according to the statistical analysis based on the transportation data. The authors characterized the performance of EDA using a Matlab simulation, and compared it with other optimization techniques.
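The core optimizer here, an estimation of distribution algorithm, is easy to sketch generically. The following toy (the Gaussian model, population sizes, and the stand-in fitness function are illustrative assumptions, not the paper's SoC-maximization model) shows the sample/select/refit loop that distinguishes an EDA from a plain genetic algorithm:

import numpy as np

def eda_maximize(fitness, dim, pop=200, elite_frac=0.3, iters=50, seed=1):
    """Gaussian EDA: repeatedly fit a distribution to the elite individuals
    and resample the next population from it."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.full(dim, 0.5), np.full(dim, 0.3)
    best, best_f = None, -np.inf
    for _ in range(iters):
        x = np.clip(rng.normal(mu, sigma, size=(pop, dim)), 0.0, 1.0)
        f = np.apply_along_axis(fitness, 1, x)
        elite = x[np.argsort(f)[-int(elite_frac * pop):]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
        if f.max() > best_f:
            best_f, best = f.max(), x[f.argmax()]
    return best, best_f

# stand-in objective: allocate charge rates x (0..1) to maximize average SoC
# under a soft total-power budget; NOT the paper's exact formulation
def fitness(x, budget=0.4):
    return x.mean() - 10.0 * max(x.sum() / x.size - budget, 0.0)

best, val = eda_maximize(fitness, dim=30)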
The Evolution of Plug-In Electric Vehicle-Grid Interactions Over the past decade key technologies have progressed so that mass-market viable plug-in electric vehicles (PEVs) are now set to reach the first of many major vehicle markets by 2011. PEV-grid interactions comprise a mix of industries that have not interacted closely in the past. A number of these commercial participants have utilized the same basic business model for nearly a century. The various participants include vehicle manufacturers, utilities, and supplier firms who have radically different business models, regulatory and legal environments, geographical scope, and technical capabilities. This paper will provide a survey of PEV technology trends and other factors. From an analysis of these factors this paper synthesizes and provides a likely scenario for PEV-grid interaction over the next decade.
Active Damping in DC/DC Power Electronic Converters: A Novel Method to Overcome the Problems of Constant Power Loads Multi-converter power electronic systems exist in land, sea, air, and space vehicles. In these systems, load converters exhibit constant power load (CPL) behavior for the feeder converters and tend to destabilize the system. In this paper, the implementation of novel active-damping techniques on dc/dc converters has been shown. Moreover, the proposed active-damping method is used to overcome the negative impedance instability problem caused by the CPLs. The effectiveness of the new proposed approach has been verified by PSpice simulations and experimental results.
Development of an Optimal Vehicle-to-Grid Aggregator for Frequency Regulation For vehicle-to-grid (V2G) frequency regulation services, we propose an aggregator that makes efficient use of the distributed power of electric vehicles to produce the desired grid-scale power. The cost arising from the battery charging and the revenue obtained by providing the regulation are investigated and represented mathematically. Some design considerations of the aggregator are also discussed together with practical constraints such as the energy restriction of the batteries. The cost function with constraints enables us to construct an optimization problem. Based on the developed optimization problem, we apply the dynamic programming algorithm to compute the optimal charging control for each vehicle. Finally, simulations are provided to illustrate the optimality of the proposed charging control strategy with variations of parameters.
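A toy version of the dynamic-programming step described above (a hedged sketch: slot prices, integer charge units, and the cost-only objective are invented for illustration; the paper's formulation also models regulation revenue and battery constraints):

def optimal_charging(prices, target, max_rate=2):
    """Minimize the cost of acquiring `target` charge units over
    len(prices) time slots, charging 0..max_rate units per slot.
    cost[s] = cheapest cost of holding s units after the slots seen so far."""
    INF = float("inf")
    cost = [0.0] + [INF] * target
    decisions = []                        # decisions[t][s] = rate used to reach s
    for p in prices:
        nxt = [INF] * (target + 1)
        arg = [0] * (target + 1)
        for s in range(target + 1):
            if cost[s] == INF:
                continue
            for r in range(min(max_rate, target - s) + 1):
                if cost[s] + p * r < nxt[s + r]:
                    nxt[s + r] = cost[s] + p * r
                    arg[s + r] = r
        cost = nxt
        decisions.append(arg)
    if cost[target] == INF:
        return None                       # infeasible: target > slots * max_rate
    plan, s = [], target                  # backtrack the charging schedule
    for arg in reversed(decisions):
        plan.append(arg[s])
        s -= arg[s]
    plan.reverse()
    return cost[target], plan

print(optimal_charging([5.0, 1.0, 3.0, 2.0], target=5))  # cost 9.0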
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
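The Bayesian scoring metric this method derives (often called the K2 metric) has a closed form that is short to implement. A sketch under assumptions (dictionary-of-rows data layout and zero-based discrete values are mine; log-space via lgamma avoids factorial overflow):

from collections import Counter
from math import lgamma

def k2_log_score(data, child, parents, arity):
    """log-marginal-likelihood contribution of `child` under the K2 metric:
    sum over parent configs j of [ log (r-1)! - log (N_j + r - 1)! +
    sum over child values k of log N_jk! ]."""
    r = arity[child]
    counts = Counter()                        # (parent config, child value) -> N_jk
    for row in data:
        j = tuple(row[p] for p in parents)
        counts[(j, row[child])] += 1
    njs = Counter()                           # parent config -> N_j
    for (j, _), n in counts.items():
        njs[j] += n
    score = 0.0
    for j, nj in njs.items():
        score += lgamma(r) - lgamma(nj + r)   # log (r-1)! - log (N_j + r - 1)!
        for k in range(r):
            score += lgamma(counts[(j, k)] + 1)   # log N_jk!
    return score

# toy usage: rows map variable -> discrete value in {0..arity-1}
data = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
print(k2_log_score(data, "B", ["A"], {"A": 2, "B": 2}))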
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm² in the above processes and they are capable of delivering >25 mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in the 0.1-10 MHz range.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Random walks in peer-to-peer networks: algorithms and evaluation We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: (a) when the overlay topology is clustered, and (b) when a client re-issues the same query while its horizon does not change much. Related to the simulation of random walks is also the distributed computation of aggregates, such as averaging. For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk on an expander graph can achieve statistical properties similar to independent sampling. This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.
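A minimal simulation of random-walk search over an unstructured overlay (an illustrative sketch; the ring-plus-chords topology, TTL, and walker count are assumed parameters, not the paper's experimental setup):

import random

def random_walk_search(adj, start, target, walkers=4, ttl=64, seed=0):
    """Issue several independent random walks; each forwards the query to a
    uniformly random neighbor until it hits the target or the TTL expires.
    Returns the hop count of the first walk that succeeds, or None."""
    rng = random.Random(seed)
    for _ in range(walkers):
        node, hops = start, 0
        while hops < ttl:
            if node == target:
                return hops
            node = rng.choice(adj[node])
            hops += 1
    return None

# toy overlay: a ring with random chords (clustered topologies are one case
# where the paper finds walks beat flooding)
n = 100
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(1)
for _ in range(50):
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        adj[a].append(b)
        adj[b].append(a)
print(random_walk_search(adj, 0, 37))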
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher levels of resource integration and the addition of new features in modern multiprocessors put significant pressure on their verification. Although considerable resources and time are devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower-quality products, lower customer satisfaction, diminished brand/company reputation, or even expensive product recalls.
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.22
0.22
0.22
0.055
0
0
0
0
0
0
0
0
0
0
Capacitive-coupled current sensing and Auto-ranging slope compensation for current mode SMPS with wide supply and frequency range Techniques for high impedance current sensing and slope compensation, common challenges for current mode switched mode power supplies (SMPS), are presented. DCR sensing, limited by conventional low impedance sensing techniques, is thus possible, enabling power efficiency gains. Auto-ranging slope compensation based on a multiplication of input voltage VIN and switching frequency fsw allows for truer current mode operation and superior line transient response for a wide range of VIN and fsw. The techniques are demonstrated in an automotive-class 60 V VIN buck controller at 150-600 kHz in a 0.35 μm BiCMOS technology. Offset error <3 mV over 0-10 V and single-cycle stable current loop response are measured.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
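The dominance-frontier computation is compact enough to sketch. The version below uses the later Cooper-Harvey-Kennedy formulation, which computes the same frontiers defined in this paper from precomputed immediate dominators (the graph encoding is an assumption):

def dominance_frontiers(preds, idom):
    """preds[b]: predecessor list of block b; idom[b]: immediate dominator.
    DF(runner) gains b for every runner on the path from a predecessor of a
    join node b up to, but excluding, idom(b)."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                 # only join nodes contribute
            for p in ps:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# tiny diamond CFG: entry -> a, b; a, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # a and b have join in their DF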
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
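Chord's single operation, mapping a key onto a node, can be modeled in a few lines. This is a deliberately simplified global-view sketch; the actual protocol resolves the same successor query through per-node finger tables in O(log N) hops and handles nodes joining and leaving:

import hashlib

M = 16                                    # identifier bits (2^M ring positions)

def chord_id(key: str) -> int:
    """Consistent hashing: map a name onto the identifier circle."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << M)

def successor(nodes, key_id):
    """Assign a key to the first node clockwise from its identifier."""
    ring = sorted(nodes)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]                        # wrap around the ring

nodes = {chord_id(f"node-{i}") for i in range(8)}
k = chord_id("some-data-item")
print(f"key {k} -> node {successor(nodes, k)}")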
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
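As a concrete instance of the method, the lasso is a standard ADMM example; a numpy sketch follows (the step size rho, iteration count, and Cholesky-based x-update are conventional choices, not prescribed by the review):

import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=100):
    """Minimize (1/2)||Ax - b||^2 + lam*||x||_1 via ADMM:
    x-update: ridge-like solve; z-update: soft threshold; u: dual ascent."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    for _ in range(iters):
        q = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, q))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# toy usage: recover a sparse coefficient vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2)[:5])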
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Dynamic sensor collaboration via sequential Monte Carlo We consider the application of sequential Monte Carlo (SMC) methods for Bayesian inference to the problem of information-driven dynamic sensor collaboration in clutter environments for sensor networks. The dynamics of the system under consideration are described by nonlinear sensing models within randomly deployed sensor nodes. The exact solution to this problem is prohibitively complex due to the nonlinear nature of the system. The SMC methods are, therefore, employed to track the probabilistic dynamics of the system and to make the corresponding Bayesian estimates and predictions. To meet the specific requirements inherent in sensor network, such as low-power consumption and collaborative information processing, we propose a novel SMC solution that makes use of the auxiliary particle filter technique for data fusion at densely deployed sensor nodes, and the collapsed kernel representation of the a posteriori distribution for information exchange between sensor nodes. Furthermore, an efficient numerical method is proposed for approximating the entropy-based information utility in sensor selection. It is seen that under the SMC framework, the optimal sensor selection and collaboration can be implemented naturally, and significant improvement is achieved over existing methods in terms of localizing and tracking accuracies.
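The SMC machinery underlying the scheme is the particle filter; a generic bootstrap version is sketched below (hedged: a scalar random-walk model stands in for the paper's nonlinear sensing models, and the auxiliary-particle and kernel-collapsing refinements are omitted):

import numpy as np

def bootstrap_pf(observations, n=500, q=0.5, r=0.8, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2): propagate, weight by the likelihood,
    then resample to avoid weight degeneracy."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)               # initial particle cloud
    estimates = []
    for y in observations:
        x = x + rng.normal(0.0, q, n)          # propagate through dynamics
        w = np.exp(-0.5 * ((y - x) / r) ** 2)  # likelihood weights
        w /= w.sum()
        estimates.append(np.dot(w, x))         # posterior-mean estimate
        idx = rng.choice(n, size=n, p=w)       # multinomial resampling
        x = x[idx]
    return estimates

# toy run: track a slowly drifting state from noisy measurements
rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0, 0.5, 50))
obs = truth + rng.normal(0, 0.8, 50)
est = bootstrap_pf(obs)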
Dynamic key management in sensor networks Numerous key management schemes have been proposed for sensor networks. The objective of key management is to dynamically establish and maintain secure channels among communicating nodes. Desired features of key management in sensor networks include energy awareness, localized impact of attacks, and scaling to a large number of nodes. A primary challenge is managing the trade-off between providing acceptable levels of security and conserving scarce resources, in particular energy, needed for network operations. Many schemes, referred to as static schemes, have adopted the principle of key predistribution with the underlying assumption of a relatively static short-lived network (node replenishments are rare, and keys outlive the network). An emerging class of schemes, dynamic key management schemes, assumes long-lived networks with more frequent addition of new nodes, thus requiring network rekeying for sustained security and survivability. In this article we present a classification of key management schemes in sensor networks delineating their similarities and differences. We also describe a novel dynamic key management scheme, localized combinatorial keying (LOCK), and compare its security and performance with a representative static key management scheme. Finally, we outline future research directions.
Asynchronous leader election and MIS using abstract MAC layer We study leader election (LE) and computation of a maximal independent set (MIS) in wireless ad-hoc networks. We use the abstract MAC layer proposed in [14] to divorce the algorithmic complexity of solving these problems from the low-level issues of contention and collisions. We demonstrate the advantages of such a MAC layer by presenting simple asynchronous deterministic algorithms to solve LE and MIS and proving their correctness. First, we present an LE algorithm for static single-hop networks in which each process sends no more than three messages to its neighbors in the system. Next, we present an algorithm to compute an MIS in a static multi-hop network in which each process sends a constant number of messages to each of its neighbors in the communication graph.
Robust Leader Election in a Fast-Changing World We consider the problem of electing a leader among nodes in a highly dynamic network where the adversary has unbounded capacity to insert and remove nodes (including the leader) from the network and change connectivity at will. We present a randomized algorithm that (re)elects a leader in O(D log n) rounds with high probability, where D is a bound on the dynamic diameter of the network and n is the maximum number of nodes in the network at any point in time. We assume a model of broadcast-based communication where a node can send only 1 message of O(log n) bits per round and is not aware of the receivers in advance. Thus, our results also apply to mobile wireless ad-hoc networks, improving over the optimal (for deterministic algorithms) O(Dn) solution presented at FOMC 2011. We show that our algorithm is optimal by proving that any randomized Las Vegas algorithm takes at least Ω(D log n) rounds to elect a leader with high probability, which shows that our algorithm yields the best possible (up to constants) termination time.
Reliable broadcast in mobile multihop packet networks
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
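A single forward step of an LSTM cell makes the constant-error-carousel idea concrete. The sketch keeps the paper's original gating (input and output gates only; the forget gate came later), with weight shapes as assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step of an LSTM cell in its original, no-forget-gate form: the
    cell state c accumulates gated input additively (the constant error
    carousel that keeps gradients from decaying), and the output gate
    controls what is exposed. W maps [x; h] to 3 gate pre-activations."""
    z = W @ np.concatenate([x, h]) + b
    d = len(c)
    i = sigmoid(z[:d])             # input gate: open/close writing
    o = sigmoid(z[d:2 * d])        # output gate: open/close reading
    g = np.tanh(z[2 * d:3 * d])    # candidate cell input
    c = c + i * g                  # additive update: constant error flow
    h = o * np.tanh(c)
    return h, c

# toy usage over a short sequence
rng = np.random.default_rng(0)
dx, dh = 4, 8
W = rng.standard_normal((3 * dh, dx + dh)) * 0.1
b = np.zeros(3 * dh)
h, c = np.zeros(dh), np.zeros(dh)
for t in range(10):
    h, c = lstm_step(rng.standard_normal(dx), h, c, W, b)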
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
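A toy model of the probabilistic-safety argument (deliberately simplified: DieHard's real heap is segregated by size class, and this miniature slot array is only an illustration of random placement in an over-provisioned heap):

import random

class ToyRandomHeap:
    """Miniature model of randomized over-provisioned allocation:
    capacity is at least 2x the maximum live objects, and slots are
    chosen uniformly at random rather than first-fit."""
    def __init__(self, max_live, seed=0):
        self.slots = [None] * (2 * max_live)
        self.rng = random.Random(seed)

    def malloc(self, obj):
        free = [i for i, s in enumerate(self.slots) if s is None]
        i = self.rng.choice(free)      # random placement
        self.slots[i] = obj
        return i

    def free(self, i):
        self.slots[i] = None

    def overflow_hits_live_object(self, i):
        """A 1-slot overflow past object i corrupts a neighbor only if the
        next slot is occupied; with a half-empty heap, roughly a coin flip."""
        j = i + 1
        return j < len(self.slots) and self.slots[j] is not None

heap = ToyRandomHeap(max_live=8, seed=42)
ids = [heap.malloc(f"obj{k}") for k in range(8)]
hits = sum(heap.overflow_hits_live_object(i) for i in ids)
print(f"{hits}/8 one-slot overflows would corrupt a live object")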
A Clustering Scheme for Hierarchical Control in Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices, whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher levels of resource integration and the addition of new features in modern multiprocessors put significant pressure on their verification. Although considerable resources and time are devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower-quality products, lower customer satisfaction, diminished brand/company reputation, or even expensive product recalls.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
score_0-score_13: 1.2, 0.2, 0.2, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 4×4 IR UWB Timed-Array Radar Based on 16-Channel Transmitter and Sampling Capacitor Reused Receiver A 4 × 4 impulse radio (IR) ultra-wide band (UWB) timed-array radar is proposed in this brief based on 16-channel all-digital transmitter and sampling capacitor reused receiver. 3-D beamforming is achieved by the 16-channel (4 × 4) planar low power IR UWB beamforming transmitter. UWB receiver adopts energy detection and reuses the integrating capacitor as C-DAC within an 8-bit SAR ADC to save area ...
On the spectral and power requirements for ultra-wideband transmission UWB systems based on impulse radio have the potential to provide very high data rates over short distances. In this paper, a new pulse shape is presented that satisfies the FCC spectral mask. Using this pulse, the link budget is calculated to quantify the relationship between data rate and distance. It is shown that UWB can be a good candidate for reliably transmitting 100 Mbps over distances of about 10 meters.
A UWB Impulse-Radio Timed-Array Radar With Time-Shifted Direct-Sampling Architecture in 0.18-μm CMOS This paper presents an ultra-wideband (UWB) impulse radio timed-array radar utilizing time-shifted direct-sampling architecture. Time shift between the sampling time of the transmitter and the receiver determines the time of arrival (TOA), and a four-element timed antenna array enables beamforming. The different time shifts among the channels at the receiver determine the object's direction of arrival (DOA). Transmitter channels have different shifts, as well, to enhance spatial selectivity. The direct-sampling receiver reconstructs the scattered waveform in the digital domain, which provides full freedom to the backend digital signal processing. The on-chip digital-to-time converter (DTC) provides all the necessary timing with a fine resolution and wide range. The proposed architecture has a range and azimuth resolution of 0.75 cm and 3 degrees, respectively. The transmitter is capable of synthesizing a variety of pulses within 800 ps at a sampling rate of 10 GS/s. The receiver has an equivalent sampling frequency of 20 GS/s while supporting the RF bandwidth from 2 to 4 GHz. The proposed designs were fabricated in a 0.18-μm standard CMOS technology with a die size of 5.4×3.3 mm² and 5.4×5.8 mm² for the transmitter and the receiver, respectively.
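A quick arithmetic check of the resolution quoted above: at the stated 20 GS/s equivalent sampling rate, one sample spans 50 ps of round-trip delay, which corresponds exactly to the 0.75 cm range resolution. A minimal Python sketch of the arithmetic:

# One sample at 20 GS/s covers 50 ps of round-trip time; halving converts
# round-trip delay to one-way range.
c = 3e8                    # speed of light, m/s (approximate)
fs = 20e9                  # equivalent sampling rate from the abstract, S/s
print(c / (2 * fs) * 100)  # -> 0.75 (cm), matching the quoted resolution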
A 100 MHz PRF IR-UWB CMOS Transceiver With Pulse Shaping Capabilities and Peak Voltage Detector. This paper presents a high-rate IR-UWB transceiver chipset implemented in a 130-nm CMOS technology for WBAN and biomedical applications in the 3.1-4.9 GHz band. The transmitter is based on a pulse synthesizer and an analytical up-converted Gaussian pulse is used to predict its settings. Its measured peak-to-peak output voltage is equal to 0.9 Vpp on a 100 Ω load for a central frequency of 4 GHz, a...
A Continuous Sweep-Clock-Based Time-Expansion Impulse-Radio Radar. This paper presents a single-chip impulse-radio (IR) radar transceiver that utilizes a novel continuous sweep-clock generator. While requiring low power and small area, the proposed clock generator enables a versatile IR radar operation with millimeter resolution. The radar detection range and update rate are adjustable by an on-chip delay command circuit or by an external master. The IR radar tra...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
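To make the completeness/accuracy split concrete, the sketch below implements a toy heartbeat-style detector in Python: it suspects any process whose last heartbeat is older than a timeout, so it is complete (crashed processes stop sending and are eventually suspected) but only unreliably accurate (slow processes may be suspected by mistake). The class name and timeout policy are invented for illustration; the paper's detectors are abstract oracles.

import time

class HeartbeatDetector:
    """Toy failure detector: suspect processes with stale heartbeats."""
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, pid):
        """Record a heartbeat from process pid."""
        self.last_seen[pid] = time.monotonic()

    def suspects(self):
        """Processes whose last heartbeat is older than the timeout."""
        now = time.monotonic()
        return {p for p, t in self.last_seen.items() if now - t > self.timeout}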
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
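A minimal Python sketch of the heavy-edge idea described above: each vertex is greedily matched with the unmatched neighbor across its heaviest edge, so heavy edges are collapsed during coarsening and kept out of the final cut. The adjacency-dict representation and names are illustrative, not from the METIS code.

def heavy_edge_matching(adj):
    """Greedy matching that prefers the heaviest incident edge.
    adj maps each vertex to a dict of {neighbor: edge_weight}."""
    matched = {}
    for u in adj:
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)        # heaviest unmatched neighbor
            matched[u], matched[v] = v, u
        else:
            matched[u] = u                # no partner available; match to self
    return matched

# 4-cycle with one heavy edge: (0, 1) is collapsed first.
adj = {0: {1: 5, 3: 1}, 1: {0: 5, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1, 0: 1}}
print(heavy_edge_matching(adj))  # {0: 1, 1: 0, 2: 3, 3: 2}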
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
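The controllability test above reduces to asking whether the reachable set covers the whole state space. The hedged Python sketch below enumerates reachable states of a toy two-node Boolean control network by brute force, rather than through the semi-tensor-product transition matrix used in the paper; the network and names are invented for illustration.

def reachable_set(step, x0, n_inputs, horizon):
    """All states reachable from x0 within `horizon` steps under any input."""
    reached, frontier = {x0}, {x0}
    for _ in range(horizon):
        frontier = {step(x, u) for x in frontier for u in range(n_inputs)}
        reached |= frontier
    return reached

def step(x, u):
    """Toy network: x1' = x2 XOR u, x2' = x1 AND u (state packed in 2 bits)."""
    x1, x2 = x >> 1, x & 1
    return ((x2 ^ u) << 1) | (x1 & u)

# All four states are reachable from 00, so this toy network is
# controllable from that state.
print(sorted(reachable_set(step, 0b00, n_inputs=2, horizon=4)))  # [0, 1, 2, 3]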
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
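Once each illumination cone is approximated by a low-dimensional subspace, the recognition step is nearest-subspace classification by Euclidean residual. A hedged NumPy sketch, with the basis construction assumed given and all names illustrative:

import numpy as np

def residual(x, B):
    """Distance from image vector x to span(B); B has orthonormal columns."""
    return np.linalg.norm(x - B @ (B.T @ x))

def classify(x, bases):
    """Assign x the identity whose approximating subspace is closest."""
    return min(bases, key=lambda k: residual(x, bases[k]))

rng = np.random.default_rng(0)
bases = {"person_a": np.linalg.qr(rng.normal(size=(64, 5)))[0],
         "person_b": np.linalg.qr(rng.normal(size=(64, 5)))[0]}
x = bases["person_a"] @ rng.normal(size=5)  # test image inside a's subspace
print(classify(x, bases))                   # -> person_a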
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
MicroGP—An Evolutionary Assembly Program Generator This paper describes μGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. μGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show μGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7-dB SNDR across a 10-kHz bandwidth with a full scale of 200 mVpp, and a ±1.2-V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
score_0-score_13: 1.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0, 0
Linearization of CMOS LNAs via Optimum Gate Biasing A FET linearization technique based on optimum gate biasing is investigated at RF. A novel bias circuit is proposed to generate the gate voltage for zero 3rd-order nonlinearity of the FET transconductance. The measured data show that a peak in IIP3 occurs at a gate voltage slightly different from the one predicted by the dc theory. The origins of this offset are explained based on a Volterra series analysis and confirmed experimentally. The technique was used in a 0.25-μm CMOS cellular-band CDMA LNA. At the optimum bias, the amplifier achieved a NF of 1.8 dB, an IIP3 of +10.5 dBm, and a power gain of 14.6 dB with a current consumption of only 2 mA from a 2.7-V supply.
A novel power optimization technique for ultra-low power RFICs This paper presents a novel power optimization technique for ultra-low power (ULP) RFICs. A new figure of merit, the gmfT-to-current ratio (gmfT/ID), is defined for a MOS transistor, which accounts for both the unity-gain frequency and current consumption. It is demonstrated both analytically and experimentally that the gmfT/ID reaches its maximum value in the moderate inversion region. Next, using the proposed method, a power-optimized common-gate low-noise amplifier (LNA) with active load has been designed and fabricated in a 0.18-μm CMOS process operating at 950 MHz. Measurement results show a noise figure (NF) of 4.9 dB and a small-signal gain of 15.6 dB with a record-breaking power dissipation of only 100 μW.
A 750 mV Fully Integrated Direct Conversion Receiver Front-End for GSM in 90-nm CMOS The design of RF integrated circuits, at the low voltage allowed by sub-scaled technologies, is particularly challenging in cellular phone applications where the received signal is surrounded by huge interferers, determining an extremely high dynamic range requirement. In-depth investigations of 1/f noise sources and second-order intermodulation distortion mechanisms in direct downconversion mixers have been carried out in the recent past. This paper proposes a fully integrated receiver front-end, including LNA and quadrature mixer, supplied at 750 mV, able to meet GSM specifications. In particular, the direct downconverter employs a feedback loop to minimize second-order common mode intermodulation distortion, generated by a pseudo-differential transconductor, adopted for minimum voltage drop. For maximum dynamic range, the commutating pair is set with an LC filter. Prototypes, realized in a 90-nm RF CMOS process, show the following performances: 51 dBm IIP2, minimum over 25 samples, 1 dB desensitization point due to 3-MHz blocker at -18 dBm, 3.5 dB noise figure (NF), integrated between 1 kHz-100 kHz, 15 kHz 1/f noise corner. The front-end IIP2 has also been characterized with the mixer feedback loop switched off, resulting in an average reduction of 18 dB.
An Ultra-Low Voltage, Low-Noise, High Linearity 900-MHz Receiver With Digitally Calibrated In-Band Feed-Forward Interferer Cancellation in 65-nm CMOS. We present an ultra-low voltage, highly linear, low noise integrated CMOS receiver operating from a 0.6-V supply. The receiver incorporates programmable, in-band feed-forward interferer cancellation at the baseband to obtain high linearity and low noise operation at ultra-low supply voltages. Being able to reject adjacent channel or far-out blockers, the digitally calibrated interferer cancellation improves the IIP3 and IIP2 by more than 13 dB and 8 dB respectively with very little impact on the receiver noise figure. As such, it breaks the trade-off between linearity and noise figure, making it possible to use a high-gain RF front-end to achieve low noise figure without affecting the linearity of the ultra-low voltage baseband circuits. The 0.6-V 900-MHz direct-conversion receiver prototype integrates a differential LNA, RF transconductors, linear quadrature current driven passive mixers, feed-forward interferer cancellation circuits, baseband variable gain transimpedance amplifiers and second-order channel-select filters. It has a nominal conversion gain of 56.4 dB, noise figure of 5 dB, IIP3 of -9.8 dBm and IIP2 of 21.4 dBm. The receiver operates reliably from 0.55-0.65 V, consumes 26.4 mW and occupies an active area of 1.7 mm² in a 65-nm low-power CMOS process, of which the feed-forward interferer cancellation circuits consume 11.4 mW and occupy 0.43 mm².
Inductorless Wideband CMOS Low-Noise Amplifiers Using Noise-Canceling Technique Two inductorless wideband low-noise amplifiers (LNAs) fabricated in a 65-nm CMOS process are presented. By using the gain-enhanced noise-canceling technique, the gain at the noise-canceling condition is increased, while the input matching is maintained. The first work is a common-source LNA with resistive shunt feedback. It achieves a maximum power gain of 10.5 dB, a bandwidth of 10 GHz, a noise figure (NF) of 2.7-3.3 dB, and an IIP3 of -3.5 dBm. The power consumption is 13.7 mW from a 1-V supply, and the area is 0.02 mm². The second work is a common-gate LNA. It achieves a maximum power gain of 10.7 dB, a bandwidth of 5.2 GHz, an NF of 2.9-5.4 dB, and an IIP3 of -6 dBm. The power consumption is 7 mW from a 1-V supply, and the area is 0.03 mm². Experimental results demonstrate that the first LNA shows the largest bandwidth, and the second LNA has the lowest power consumption among the inductorless wideband LNAs.
An Ultra-Wideband 0.4-10-GHz LNA in 0.18-μm CMOS A two-stage ultra-wideband CMOS low-noise amplifier (LNA) is presented. With the common-gate configuration employed as the input stage, broadband input matching is obtained and the noise does not rise rapidly at higher frequency. By combining the common-gate and common-source stages, the broadband characteristic and small area are achieved by using two inductors. This LNA has been fabricated in a 0.18-μm CMOS process. The measured power gain is 11.2-12.4 dB and the noise figure is 4.4-6.5 dB with a -3-dB bandwidth of 0.4-10 GHz. The measured IIP3 is -6 dBm at 6 GHz. It consumes 12 mW from a 1.8-V supply voltage and occupies only 0.42 mm².
A Fully Differential Band-Selective Low-Noise Amplifier for MB-OFDM UWB Receivers A band-selective low-noise amplifier (BS-LNA) for multiband orthogonal frequency-division multiplexing ultra-wide-band (UWB) receivers is presented. A switched capacitive network that controls the resonant frequency of the LC load for the band selection is used. It greatly enhances the gain and noise performance of the LNA in each frequency band without increasing power consumption. Moreover, a fu...
All-digital PLL and transmitter for mobile phones We present the first all-digital PLL and polar transmitter for mobile phones. They are part of a single-chip GSM/EDGE transceiver SoC fabricated in a 90 nm digital CMOS process. The circuits are architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrateable with a digital baseband and application processor. To achieve this, we exploit the new paradigm of a deep-submicron CMOS process environment by leveraging on the fast switching times of MOS transistors, the fine lithography and the precise device matching, while avoiding problems related to the limited voltage headroom. The transmitter architecture is fully digital and utilizes the wideband direct frequency modulation capability of the all-digital PLL. The amplitude modulation is realized digitally by regulating the number of active NMOS transistor switches in accordance with the instantaneous amplitude. The conventional RF frequency synthesizer architecture, based on a voltage-controlled oscillator and phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter. The transmitter performs GMSK modulation with less than 0.5° rms phase error, -165 dBc/Hz phase noise at 20 MHz offset, and 10 μs settling time. The 8-PSK EDGE spectral mask is met with 1.2% EVM. The transmitter occupies 1.5 mm² and consumes 42 mA from a 1.2-V supply while producing 6 dBm RF output power.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
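The classical Lloyd-Max result that this algorithm generalizes alternates two optimality conditions: nearest-representative partitioning and the centroid condition. A compact single-sensor NumPy sketch (the distributed, bivariate-distribution machinery of the paper is not modeled):

import numpy as np

def lloyd_max(samples, levels, iters=50):
    """Alternate nearest-level partitioning with the centroid condition."""
    reps = np.linspace(samples.min(), samples.max(), levels)  # initial codebook
    for _ in range(iters):
        # partition: assign each sample to its nearest representative
        idx = np.argmin(np.abs(samples[:, None] - reps[None, :]), axis=1)
        # centroid condition: each representative becomes its cell's mean
        for k in range(levels):
            if np.any(idx == k):
                reps[k] = samples[idx == k].mean()
    return reps

rng = np.random.default_rng(0)
print(lloyd_max(rng.normal(size=10_000), levels=4))
# close to the textbook 4-level quantizer for a unit Gaussian: ±0.45, ±1.51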
The Transitive Reduction of a Directed Graph
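For reference, the transitive reduction of a DAG keeps edge (u, v) only when v is not reachable from u along any longer path. A short brute-force Python sketch (the cited paper's contribution is computing this efficiently; this O(V·E) version is only illustrative):

def transitive_reduction(adj):
    """Remove every edge implied by a longer path in a DAG."""
    def reachable(src, dst, skip_edge):
        stack, seen = [src], set()
        while stack:
            x = stack.pop()
            for y in adj.get(x, ()):
                if (x, y) == skip_edge or y in seen:
                    continue
                if y == dst:
                    return True
                seen.add(y)
                stack.append(y)
        return False

    return {u: [v for v in vs if not reachable(u, v, (u, v))]
            for u, vs in adj.items()}

# The shortcut a->c is implied by a->b->c and is dropped.
print(transitive_reduction({"a": ["b", "c"], "b": ["c"], "c": []}))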
Stacked-Chip Implementation of On-Chip Buck Converter for Distributed Power Supply System in SiPs An on-chip buck converter which is implemented by stacking chips and suitable for on-chip distributed power supply systems is proposed. The operation of the converter with 3-D chip stacking is experimentally verified for the first time. The manufactured converter achieves a maximum power efficiency of 62% for an output current of 70 mA and a voltage conversion ratio of 0.7 with a switching frequen...
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
A 0.5-V 2.5-GHz high-gain low-power regenerative amplifier based on Colpitts oscillator topology in 65-nm CMOS This paper proposes the regenerative amplifier based on the Colpitts oscillator topology. The positive feedback amount was optimized analytically in the circuit design. The proposed regenerative amplifier was fabricated in 65 nm CMOS technology. The measurement results showed 28.7 dB gain and 6.4 dB noise figure at 2.55 GHz while consuming 120 μW under the 0.5-V power supply.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0-score_13: 1.101439, 0.050737, 0.034271, 0.033825, 0.014802, 0.006024, 0.000333, 0.000014, 0, 0, 0, 0, 0, 0
A 77-dB-DR 0.65-mW 20-MHz 5th-Order Coupled Source Followers Based Low-Pass Filter A compact coupled source followers (CSFs)-based low-pass filter (LPF) topology is presented with excellent power efficiency and high linearity. It synthesizes a 3rd-order low-pass transfer function in a single stage that is comprised of two CSFs and three capacitors. It can also be configured to a 2nd-order by disconnecting a capacitor. A 5th-order LPF prototype is designed with a cascade of two proposed filter stages in a 0.18-μm CMOS process. Operating under a 1.3-V supply voltage, the filter consumes 0.5-mA total current and achieves a -3 dB bandwidth of 20 MHz. A total harmonic distortion (THD) of -39.5 dBc at the output is measured with a +6.6 dBm (i.e., 1.35 Vpk-pk) 2-MHz input signal. The measured in-band 3rd-order input interception point (IIP3) is +24.5 dBm. The resulting dynamic range (DR) at -40 dBc THD is 76.9 dB, with 15.3-nV/√Hz averaged in-band input-referred noise. The chip occupies an active area of 0.12 mm².
An 8-Bit 100-MHz CMOS Linear Interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
Active-RC Filters Using the Gm-Assisted OTA-RC Technique. The linearity of conventional active-RC filters is limited by the operational transconductance amplifiers (OTAs) used in the integrators. Transconductance-capacitance (Gm-C) filters are fast and can be linear; however, they are sensitive to parasitic capacitances. We explore the Gm-assisted OTA-RC technique, which is a way of combining Gm-C and active-RC integrators in a manner that enhances the l...
Analysis and Design of a High-Order Discrete-Time Passive IIR Low-Pass Filter In this paper, we propose a discrete-time IIR low-pass filter that achieves a high-order of filtering through a charge-sharing rotation. Its sampling rate is then multiplied through pipelining. The first stage of the filter can operate in either a voltage-sampling or charge-sampling mode. It uses switches, capacitors and a simple gm-cell, rather than opamps, thus being compatible with digital nanoscale technology. In the voltage-sampling mode, the gm-cell is bypassed so the filter is fully passive. A 7th-order filter prototype operating at 800 MS/s sampling rate is implemented in TSMC 65 nm CMOS. Bandwidth of this filter is programmable between 400 kHz to 30 MHz with 100 dB maximum stop-band rejection. Its IIP3 is +21 dBm and the averaged spot noise is 4.57 nV/√Hz. It consumes 2 mW at 1.2 V and occupies 0.42 mm².
A Clock-Phase Reuse Technique for Discrete-Time Bandpass Filters In this article, we apply a new clock-phase reuse technique to a discrete-time infinite impulse response (IIR) complex-signaling bandpass filter (BPF). This leads to a deep improvement in filtering, especially the stopband rejection, while maintaining the area, sampling frequency, and the number of clock phases and their pulsewidths. Fabricated in 28-nm CMOS, the proposed BPF is highly tuneable an...
Analysis and Optimization of Current-Driven Passive Mixers in Narrowband Direct-Conversion Receivers Properties of the current-driven passive mixer are explored to maximize its performance in a zero-IF receiver. Since there is no reverse isolation between the RF and baseband sides of the mixer, the mixer reflects the baseband impedance to the RF and vice versa through simple frequency shifting. It is also shown that in an IQ down-conversion system the lack of reverse isolation causes a mutual interaction between the two quadrature mixers, which results in different high-and low-side conversion gains, and unexpected IIP2 and IIP3 values. With a thorough and accurate mathematical analysis it is shown how to design this mixer and its current buffer, and how to size components to get the best linearity, conversion gain and noise figure while alleviating the IQ cross-talk problem.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks.Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic net-works, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
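LEACH's randomized rotation is usually written as a per-round election threshold T(n) = P / (1 - P·(r mod 1/P)), applied to nodes that have not served as head in the current epoch. A hedged Python sketch of that rule (P and the names are illustrative):

import random

def is_cluster_head(served_this_epoch, r, P=0.05):
    """Self-elect as cluster head in round r using the LEACH threshold."""
    if served_this_epoch:            # already served; wait out the epoch
        return False
    T = P / (1 - P * (r % round(1 / P)))
    return random.random() < T       # T rises to 1 by the epoch's last round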
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
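Under fixed-peak, variable-off-time control in CCM, the average LED current is roughly the peak minus half the ripple, and the off time sets the ripple through ΔI ≈ V_LED·t_off/L. A worked numeric sketch in Python with illustrative values (not measurements from the paper):

# Idealized CCM relation for a fixed-peak, variable-off-time buck LED driver.
I_pk = 0.060     # peak inductor current, A (illustrative)
V_led = 14.0     # LED string voltage, V (illustrative)
L = 470e-6       # storage inductor, H
t_off = 1.0e-6   # controller-chosen off time, s

ripple = V_led * t_off / L   # current fall during t_off: ~30 mA
I_avg = I_pk - ripple / 2    # ~45 mA average LED current
print(round(ripple * 1e3, 1), round(I_avg * 1e3, 1))  # mA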
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient, fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
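For context, the underlying SAR conversion is a binary search that tests one bit per cycle from MSB to LSB against the comparator. A behavioral Python sketch (the brief's event-triggered correction cycle is not modeled):

def sar_convert(vin, vref=0.65, bits=10):
    """Plain successive approximation: trial-set each bit, keep it if the
    DAC output stays at or below the input."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        if vin >= vref * trial / (1 << bits):  # comparator decision
            code = trial
    return code

print(sar_convert(0.3))  # 472, i.e. round-down of 0.3/0.65 * 1024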
score_0-score_13: 1.2, 0.2, 0.2, 0.2, 0.1, 0.013333, 0, 0, 0, 0, 0, 0, 0, 0
Grid Influenced Peer-to-Peer Energy Trading This paper proposes a peer-to-peer (P2P) energy trading scheme that can help a centralized power system to reduce the total electricity demand of its customers at the peak hour. To do so, a cooperative Stackelberg game is formulated, in which the centralized power system acts as the leader that needs to decide on a price at the peak demand period to incentivize prosumers to not seek any energy from it. The prosumers, on the other hand, act as followers and respond to the leader’s decision by forming suitable coalitions with neighboring prosumers in order to participate in P2P energy trading to meet their energy demand. The properties of the proposed Stackelberg game are studied. It is shown that the game has a unique and stable Stackelberg equilibrium, as a result of the stability of prosumers’ coalitions. At the equilibrium, the leader chooses its strategy using a derived closed-form expression, while the prosumers choose their equilibrium coalition structure. An algorithm is proposed that enables the centralized power system and the prosumers to reach the equilibrium solution. Numerical case studies demonstrate the beneficial properties of the proposed scheme.
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods.Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
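The "1/N formula of Miller and Madow" mentioned above is the bias correction H_MM = H_plugin + (m - 1)/(2N), where m is the number of observed symbols and N the sample size. A small NumPy sketch comparing the two estimators:

import numpy as np

def entropy_estimates(samples):
    """Plug-in (MLE) entropy and its Miller-Madow correction, in nats."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    h_plugin = -(p * np.log(p)).sum()
    h_mm = h_plugin + (len(counts) - 1) / (2 * samples.size)
    return h_plugin, h_mm

rng = np.random.default_rng(1)
print(entropy_estimates(rng.integers(0, 8, size=200)))  # true H = ln 8 ≈ 2.079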
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
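An illustrative NumPy rendering of the expand-then-prune structure the abstract describes: keep a size-K support, enlarge it with the K columns most correlated with the residual, then prune back by least-squares magnitude. This follows the published description, not the authors' reference code, and assumes the sparsity K is known.

import numpy as np

def subspace_pursuit(A, y, K, iters=10):
    support = np.argsort(np.abs(A.T @ y))[-K:]           # initial support
    for _ in range(iters):
        r = y - A[:, support] @ np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])  # expand
        x_cand = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        support = cand[np.argsort(np.abs(x_cand))[-K:]]  # prune back to K
    x = np.zeros(A.shape[1])
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 17, 60]] = [1.0, -2.0, 0.5]
x_hat = subspace_pursuit(A, A @ x_true, K=3)
print(np.flatnonzero(x_hat))  # recovers the support {5, 17, 60}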
Household Electricity Demand Forecast Based on Context Information and User Daily Schedule Analysis From Meter Data The very short-term load forecasting (VSTLF) problem is of particular interest for use in smart grid and automated demand response applications. An effective solution for VSTLF can facilitate real-time electricity deployment and improve its quality. In this paper, a novel approach to model the very short-term load of individual households based on context information and daily schedule pattern analysis is proposed. Several daily behavior pattern types were obtained by analyzing the time series of daily electricity consumption, and context features from various sources were collected and used to establish a rule set for use in anticipating the likely behavior pattern type of a specific day. Meanwhile, an electricity consumption volume prediction model was developed for each behavior pattern type to predict the load at a specific time point in a day. This study was concerned with solving the VSTLF for individual households in Taiwan. The proposed approach obtained an average mean absolute percentage error (MAPE) of 3.23% and 2.44% for forecasting individual household load and aggregation load 30-min ahead, respectively, which is more favorable than other methods.
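The MAPE figures quoted above are mean absolute percentage errors; as a one-function Python sketch (assuming nonzero actual loads):

import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((actual - forecast) / actual))

print(mape(np.array([2.0, 4.0]), np.array([2.1, 3.8])))  # 5.0 (%)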
A Consensus-Based Cooperative Control of PEV Battery and PV Active Power Curtailment for Voltage Regulation in Distribution Networks. The rapid growth of rooftop photovoltaic (PV) arrays installed in residential houses leads to serious voltage quality problems in low voltage distribution networks (LVDNs). In this paper, a combined method using the battery energy management of plug-in electric vehicles (PEVs) and the active power curtailment of PV arrays is proposed to regulate voltage in LVDNs with high penetration level of PV r...
Multi-Agent Based Transactive Energy Management Systems for Residential Buildings with Distributed Energy Resources Proper management of building loads and distributed energy resources (DER) can offer grid assistance services in transactive energy (TE) frameworks besides providing cost savings for the consumer. However, most TE models require building loads and DER units to be managed by external entities (e.g., aggregators), and in some cases, consumers need to provide critical information related to their ele...
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks.Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic net-works, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the mT level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient, fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
score_0-score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0
Convergence of Distributed Accelerated Algorithm Over Unbalanced Directed Networks In this article, the problem of the distributed convex optimization is investigated, where the target is to collectively minimize a sum of local convex functions over an unbalanced directed multiagent network. Each agent in the network possesses only its private local objective function, and the sum of all local objective functions constitutes the global objective function. We particularly conside...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
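Dominance frontiers are now commonly computed with the later "runner" formulation: for every join point b, walk from each predecessor up the dominator tree to idom(b), adding b to the frontier of each node passed. A hedged Python sketch assuming the immediate-dominator map is already available (this is the well-known simplification, not the paper's original algorithm):

def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate dominator."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                 # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)    # b is in runner's frontier
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b -> join; 'join' lands in the frontiers of a and b.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))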
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
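Chord's single operation, mapping a key to a node, is consistent hashing on a 2^m identifier ring: a key lives at its successor, the first node whose ID follows the key clockwise. A minimal Python sketch of the ring arithmetic (finger-table routing and node churn are omitted; M and the names are illustrative):

import hashlib

M = 16  # a 2^16 identifier space for the example

def chord_id(name: str) -> int:
    """Hash a name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(key_id, node_ids):
    """The first node clockwise from key_id stores the key."""
    node_ids = sorted(node_ids)
    for n in node_ids:
        if n >= key_id:
            return n
    return node_ids[0]  # wrap around the ring

nodes = [chord_id(f"node-{i}") for i in range(8)]
print(successor(chord_id("my-data-item"), nodes))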
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
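For reference, the scaled-form ADMM iterations for minimizing f(x) + g(z) subject to Ax + Bz = c — the standard textbook form this review surveys; its own notation may differ:

$$
\begin{aligned}
x^{k+1} &:= \arg\min_{x}\ \Big( f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k} \rVert_2^2 \Big),\\
z^{k+1} &:= \arg\min_{z}\ \Big( g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k} \rVert_2^2 \Big),\\
u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{aligned}
$$

where u is the scaled dual variable and ρ > 0 the penalty parameter.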
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance, and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
30-Gb/s 1.11-pJ/bit Single-Ended PAM-3 Transceiver for High-Speed Memory Links A 30-Gb/s three-level pulse amplitude modulation (PAM-3) transceiver is designed with a one-tap tri-level decision feedback equalizer (DFE) to realize a high-speed dynamic random access memory (DRAM) interface via the 28-nm CMOS process. A 1.5-bit/pin bit efficiency is achieved by encoding and decoding 3-bit data in two unit intervals (UIs). The half-rate PAM-3 transmitter modulates single-ended pseudorandom binary sequence (PRBS) 7/15 data using a low-power encoding logic and an output driver. The receiver achieves a bit error rate (BER) of less than 1E-12 over an 80-mm FR-4 printed circuit board (PCB) channel. At the maximum data rate, the bit efficiency of the transceiver is 1.11 pJ/bit, consuming 33.4 mW. In the receiver, the attenuated PAM-3 data are equalized by a continuous-time linear equalizer (CTLE) and a one-tap tri-level DFE, which has the same complexity as that of non-return-to-zero (NRZ) signaling. The tri-state buffers, which have a floating PMOS switch, convert the output of the comparator into NRZ data, resulting in reduced delay and power dissipation. Four channels of the transceivers operate at data rates of up to 30 × 4 Gb/s, and the horizontal eye margin of the measured PAM-3 data is achieved at a UI of 0.14 for the PRBS-7 pattern at the maximum data rate.
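The 1.5-bit/pin efficiency comes from packing 3 bits into two ternary UIs: two PAM-3 symbols give nine patterns, eight of which carry the payload. A toy encoder/decoder sketch with an illustrative mapping (not the paper's actual code assignment):

```python
# Sketch: packing 3 bits into two PAM-3 symbols (1.5 bit/UI).
# The specific 3-bit -> symbol-pair mapping below is illustrative;
# one of the nine two-symbol patterns is simply left unused.

LEVELS = (-1, 0, +1)
PATTERNS = [(a, b) for a in LEVELS for b in LEVELS]   # 9 patterns
ENCODE = {i: PATTERNS[i] for i in range(8)}           # 3-bit value -> 2 UIs
DECODE = {v: k for k, v in ENCODE.items()}

def encode(bits3: int):
    return ENCODE[bits3 & 0b111]

def decode(sym_pair):
    return DECODE[sym_pair]

assert all(decode(encode(v)) == v for v in range(8))
```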
Current-Mode Triline Transceiver for Coded Differential Signaling Across On-Chip Global Interconnects. This paper presents a current-mode triline ternary-level coded differential signaling scheme for high-speed data transmission across on-chip global interconnects. An energy-efficient current-mode triline transceiver pair suitable for this signaling scheme has been proposed. Compared with a voltage-mode receiver with resistive termination, the proposed actively terminated current-mode receiver reduces...
A 32-Gb/s PAM-4 Quarter-Rate Clock and Data Recovery Circuit With an Input Slew-Rate Tolerant Selective Transition Detector We present a 32-Gb/s PAM-4 quarter-rate clock and data recovery (CDR) circuit having a newly proposed selective transition detector (STD). The STD allows phase detection of PAM-4 data in a simple manner by eliminating middle transition and majority voting with simple logic gates. In addition, using the edge-rotating technique with quarter-rate CDR operation, our CDR achieves power consumption and chip area reduction. A prototype 32-Gb/s quarter-rate PAM-4 CDR circuit is realized with 28-nm CMOS technology. The CDR circuit consumes 32 mW with 1.2-V supply and the recovered clock signal has 0.0136-UI rms jitter.
A 1.02-pJ/b 20.83-Gb/s/Wire USR Transceiver Using CNRZ-5 in 16-nm FinFET. An energy-efficient (1.02 pJ/b) and high-speed (20.83 Gb/s/wire, 417 Gb/s/mm) link for ultra-short reach (USR) applications (up to 6-dB channel loss at the Nyquist frequency of 12.5 GHz) is presented. Correlated non-return to zero (CNRZ) signaling with low sensitivity to inter-symbol interference (ISI) has been developed to improve the link budget. In addition to high pin efficiency (5b6w: 5 bits ...
A Single-Ended Parallel Transceiver With Four-Bit Four-Wire Four-Level Balanced Coding for the Point-to-Point DRAM Interface. A four-bit four-wire four-level (4B4W4L) single-ended parallel transceiver for the point-to-point DRAM interface achieved a peak reduction of ~10 dB in the electromagnetic interference (EMI) H-field power, compared to a conventional 4-bit parallel binary transceiver with the same output driver power of transmitter (TX) and the same input voltage margin of receiver (RX). A four-level balanced codin...
A 0.14-to-0.29-pJ/bit 14-GBaud/s Trimodal (NRZ/PAM-4/PAM-8) Half-Rate Bang-Bang Clock and Data Recovery (BBCDR) Circuit in 28-nm CMOS This paper reports a half-rate bang-bang clock and data recovery (BBCDR) circuit supporting trimodal (NRZ/PAM-4/PAM-8) operation. The observation of the crossover-points distribution at the transitions introduces the single-loop phase tracking technique. In addition, low-power techniques at both the architecture and circuit levels are employed to greatly improve the overall energy efficiency and multiply data throughput by increasing the number of levels on the magnitude. Fabricated in 28-nm CMOS, our BBCDR prototype scores a 0.29/0.17/0.14 pJ/bit efficiency at 14.4/28.8/43.2 Gb/s under NRZ/PAM-4/PAM-8 modes, respectively. The jitter is < 0.53 ps (integrated from 100 Hz to 1 GHz) with approximately equivalent constant loop bandwidth, and we achieve at least 1-UIpp jitter tolerance up to 10 MHz for all three modes.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with a maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvements in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
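LEACH's randomized cluster-head rotation is commonly stated as a threshold test: in round r, node n elects itself cluster-head if a uniform random draw falls below T(n). The standard rule, with P the desired cluster-head fraction and G the set of nodes that have not served as cluster-head in the last 1/P rounds:

$$
T(n) =
\begin{cases}
\dfrac{P}{1 - P\,\big(r \bmod \tfrac{1}{P}\big)}, & n \in G,\\[6pt]
0, & \text{otherwise.}
\end{cases}
$$

The denominator shrinks as the epoch progresses, so nodes that have not yet served become increasingly likely to be elected, evening out the energy load.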
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
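A hedged sketch of the shape such consensus-based primal-dual updates typically take, with weight matrix W = [w_ij], constant step size α, and local Lagrangian L_i; this is a generic form, and the paper's exact recursion may differ:

$$
x_i^{k+1} = P_X\Big[\textstyle\sum_j w_{ij}\, x_j^{k} - \alpha\, g_{x,i}^{k}\Big],
\qquad
\mu_i^{k+1} = P_D\Big[\textstyle\sum_j w_{ij}\, \mu_j^{k} + \alpha\, g_{\mu,i}^{k}\Big],
$$

where g_{x,i}^k is a subgradient of L_i in the primal variable, g_{μ,i}^k one in the dual variable, and P_X, P_D are projections onto the constraint and dual sets. Each agent thus mixes neighbors' estimates by consensus, then descends in the primal and ascends in the dual toward a saddle point.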
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated in a 180-nm CMOS process and occupies a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieves an effective number of bits of 9.61 and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearities are within 0.30 LSB and 0.43 LSB, respectively.
1.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
SAW-Less Software-Defined Radio Transceivers in 40nm CMOS The introduction of several new cellular and connectivity radio standards has attracted the wireless industry to the concept of software-defined radio systems, preferably implemented in advanced nanometer CMOS technology. A first generation of transceivers, using several advances in new circuits and architectures, combined with extensive digital compensation techniques, are indeed able to operate over the complete range of both RF frequencies and baseband bandwidths and as such act like an SDR. However, a real SDR must go further than this. Interoperability and coexistence scenarios, combined with the need to eliminate external fixed-frequency acoustic RF filters, lead to much more stringent requirements on linearity and noise. Therefore, this paper will also present a novel second generation of 40nm CMOS transceivers that enable this. On the TX side, it is crucial to achieve -160dBc/Hz noise level for all possible combinations of RF frequency, baseband bandwidth, and RX-TX duplex spacing. In the receiver, extremely linear circuits are presented, that are able to handle blockers of around 0dBm input level.
An Incremental-Charge-Based Digital Transmitter With Built-in Filtering A fully integrated transmitter architecture operating in the charge-domain with incremental signaling is presented. The architecture provides improved out-of-band noise performance, thanks to an intrinsic low-pass noise filtering capability, reduced quantization noise scaled by capacitance ratios, and sinc² alias attenuation due to a quasi-linear reconstruction interpolation. With a respective un...
A fully digital multimode polar transmitter employing 17b RF DAC in 3G mode.
Design Considerations for a Direct Digitally Modulated WLAN Transmitter With Integrated Phase Path and Dynamic Impedance Modulation. A 65-nm digitally modulated polar TX for WLAN 802.11g is fully integrated along with baseband digital filtering. The TX employs dynamic impedance modulation to improve efficiency at back-off powers. High-bandwidth phase modulation is achieved efficiently with an open-loop architecture. Operating from 1.2-V/1-V supplies, the TX delivers 16.8 dBm average power at -28-dB EVM with 24.5% drain efficien...
A Switched-Capacitor RF Power Amplifier A fully integrated switched-capacitor power amplifier (SCPA) utilizes switched-capacitor techniques in an EER/Polar architecture. It operates on the envelope of a nonconstant envelope modulated signal as an RF-DAC in order to amplify the signal efficiently. The measured maximum output power and PAE are 25.2 dBm and 45%, respectively. When amplifying an 802.11g 64-QAM orthogonal frequency-division multiplexing (OFDM) signal, the measured error vector magnitude is 2.6% and the average output power and power-added efficiencies are 17.7 dBm and 27%, respectively.
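To first order, an SCPA's output amplitude is set by the ratio of switched to total unit capacitance, which is what makes it behave as an RF-DAC. A hedged, idealized relation (ignoring the matching network's impedance transformation and switching losses):

$$
V_{\text{out}} \propto \frac{n}{N}\, V_{DD}, \qquad P_{\text{out}} \propto \Big(\frac{n}{N}\Big)^{2},
$$

where n of the N unit capacitors are toggled at RF and the rest are held static. The envelope code thus selects n sample by sample, while the phase-modulated carrier drives the toggling edge.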
A CMOS IQ direct digital RF modulator with embedded RF FIR-based quantization noise filter This paper presents a new approach to reduce the out-of-band quantization noise of Direct Digital RF Modulators (DDRM). The DDRM is organized in a FIR-like configuration to filter the quantization noise in the RX band directly at RF. To demonstrate the principle, a 0.9 GHz FIR IQ DDRM has been integrated in 130 nm CMOS. The transmitter achieves more than 22 dB reduction in the quantization noise floor, reaching -152 dBc/Hz at 20 MHz with a 200 kHz baseband tone. The actual DDRM is capable of both amplitude and phase modulation by using a new four-phase IQ architecture. This results in a reduced power consumption and chip area. The transmitter consumes 94 mW from a 2.7 V supply and achieves an average output power of 9.5 dBm. Leakage of -35 dB into the adjacent channel and -53 dB into the alternate channel has been measured for a 10 MHz OFDM signal. It also achieves -27.2 dB EVM with a 64QAM input signal.
CMOS Doherty Amplifier With Variable Balun Transformer and Adaptive Bias Control for Wireless LAN Application This paper presents a novel CMOS Doherty power amplifier (PA) with an impedance inverter using a variable balun transformer (VBT) and adaptive bias control of an auxiliary amplifier. Unlike a conventional quarter-wavelength (λ/4) transmission line impedance inverter of a Doherty PA, the proposed VBT impedance inverter can achieve load modulation without any phase delay circuit. As a result, a λ/4 phase compensation circuit at the input path of the auxiliary amplifier can be removed, and the total size of the Doherty PA can be reduced. Additionally, an enhancement of the power efficiency at backed-off power levels can successfully be achieved with an adaptive gate bias in a common gate stage of the auxiliary amplifier. The PA, fabricated with 0.13-μm CMOS technology, achieved a 1-dB compression point (P1 dB) of 31.9 dBm and a power-added efficiency (PAE) at P1 dB of 51%. When the PA is tested with 802.11g WLAN orthogonal frequency division multiplexing (OFDM) signal of 54 Mb/s, a 25-dB error vector magnitude (EVM) compliant output power of 22.8 dBm and a PAE of 30.1% are obtained, respectively.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Robust Stochastic Approximation Approach to Stochastic Programming In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
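The SA scheme compared here has the classical projected-stochastic-subgradient shape; the robust modification the paper advocates averages the trajectory rather than relying on the last iterate. A sketch with stepsizes γ_j and stochastic subgradient oracle G:

$$
x_{j+1} = \Pi_X\big(x_j - \gamma_j\, G(x_j,\xi_j)\big),
\qquad
\tilde{x}_N = \frac{\sum_{j=1}^{N} \gamma_j\, x_j}{\sum_{j=1}^{N} \gamma_j},
$$

where Π_X is projection onto the feasible set. For convex problems the averaged point achieves an O(1/√N) expected error with suitably chosen constant-order stepsizes, which is what lets the modified SA compete with SAA.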
Timing Recovery in Digital Synchronous Data Receivers A new class of fast-converging timing recovery methods for synchronous digital data receivers is investigated. Starting with a worst-case timing offset, convergence with random binary data will typically occur within 10-20 symbols. The input signal is sampled at the baud rate; these samples are then processed to derive a suitable control signal to adjust the timing phase. A general method is outlined to obtain near-minimum-variance estimates of the timing offset with respect to a given steady-state sampling criterion. Although we make certain independence assumptions between successive samples and postulate ideal decisions to obtain convenient analytical results, our simulations with a decision-directed reference and baud-to-baud adjustments yield very similar results. Convergence is exponential, and for small loop gains the residual jitter is proportional and convergence time is inversely proportional to the loop gain. The proposed algorithms are simple and economic to implement. They apply to binary or multilevel PAM signals as well as to partial response signals.
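This class of baud-rate, decision-directed timing recovery is what later literature calls the Mueller-Müller detector. A minimal sketch of the error term and first-order loop update, with the loop gain and names being illustrative (the paper derives near-minimum-variance variants of the same idea):

```python
# Sketch: baud-rate (Mueller-Muller style) timing-error detection.
# x: received samples taken once per symbol, a: symbol decisions.
# Gain value is illustrative, not from the paper.

def mm_ted(x_k, x_km1, a_k, a_km1):
    """Classic M&M timing-error term from two successive baud samples."""
    return a_km1 * x_k - a_k * x_km1

def update_phase(tau, err, gain=0.01):
    """First-order loop: small gain -> low jitter, slow convergence."""
    return tau - gain * err
```

As the abstract notes, jitter scales with the loop gain while convergence time scales inversely with it, so the gain is the key design knob.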
Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
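The distributed dual-averaging recursion analyzed in this line of work has each agent i mix its neighbors' dual variables, add a local subgradient g_i(t), and project through a proximal function ψ. A sketch, with P = [p_ij] a doubly stochastic consensus matrix and α(t) the stepsizes (notation illustrative):

$$
z_i(t+1) = \sum_{j} p_{ij}\, z_j(t) + g_i(t),
\qquad
x_i(t+1) = \Pi_X^{\psi}\big(z_i(t+1), \alpha(t)\big),
$$

where $\Pi_X^{\psi}(z,\alpha) = \arg\min_{x\in X}\{\langle z, x\rangle + \tfrac{1}{\alpha}\,\psi(x)\}$. The consensus mixing is what ties the convergence rate to the spectral gap of P, the paper's central scaling result.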
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
A Highly Adaptive Leader Election Algorithm for Mobile Ad Hoc Networks.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.05435
0.0275
0.011963
0.006061
0.003897
0.000833
0.000333
0
0
0
0
0
0
0
Secured Data Transmission Over Insecure Networks-on-Chip by Modulating Inter-Packet Delays As the network-on-chip (NoC) integrated into an SoC design can come from an untrusted third party, there is a growing risk that data integrity and security get compromised when supposedly sensitive data flows through such an untrusted NoC. We thus introduce a new method that can ensure secure and secret data transmission over such an untrusted NoC. Essentially, the proposed scheme relies on encoding binary data as delays between packets travelling across the source and destination pair. The maximum data transmission rate of this inter-packet-delay (IPD)-based communication channel can be determined from the analytical model developed in this article. To further improve the undetectability and robustness of the proposed data transmission scheme, a new block coding method and communication protocol are also proposed. Experimental results show that the proposed IPD-based method can achieve a packet error rate (PER) of as low as 0.3% and an effective throughput of 2.3×10^5 b/s, outperforming the methods of thermal covert channel, cache covert channel, and circuit-based encryption, and is thus suitable for secure data transmission in insecure systems.
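A minimal sketch of the basic IPD idea, using two delay bins around a base inter-packet gap. The constants and threshold are illustrative assumptions; the paper adds block coding and a protocol on top of this raw channel:

```python
# Sketch: encoding bits as inter-packet delays (IPD).
# BASE/DELTA and the decision threshold are illustrative values,
# not the paper's calibrated parameters.

BASE, DELTA = 100e-9, 40e-9            # seconds between packets

def bits_to_delays(bits):
    """Bit 1 -> long gap, bit 0 -> short gap."""
    return [BASE + (DELTA if b else 0.0) for b in bits]

def delays_to_bits(delays, thresh=BASE + DELTA / 2):
    """Receiver thresholds the measured inter-packet gaps."""
    return [1 if d > thresh else 0 for d in delays]

bits = [1, 0, 1, 1, 0]
assert delays_to_bits(bits_to_delays(bits)) == bits
```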
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance, and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5-200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
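The spatial delta encoding mentioned above exploits inter-channel correlation by transmitting one reference channel plus differences. A minimal sketch assuming simple first-neighbor differencing within one time-multiplexed frame; the chip's actual scheme may differ:

```python
# Minimal sketch of spatial delta encoding across adjacent channels.
# One frame = one sample per channel at a given instant: channel 0
# is sent raw, the rest as differences from their left neighbor.

def delta_encode(frame):
    return [frame[0]] + [frame[i] - frame[i - 1] for i in range(1, len(frame))]

def delta_decode(codes):
    out = [codes[0]]
    for d in codes[1:]:
        out.append(out[-1] + d)
    return out

frame = [12, 13, 13, 15, 14]            # correlated neighboring channels
assert delta_decode(delta_encode(frame)) == frame
```

Because neighboring electrodes see similar signals, the differences have a much smaller dynamic range than the raw samples, which is what saves bits on the time-multiplexed link.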
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Finite-time synchronization of fully complex-valued neural networks with fractional-order. In this paper, without separating complex-valued neural networks into two real-valued systems, finite-time synchronization is addressed for a class of fully complex-valued fractional-order neural networks. Firstly, a new fractional-order differential inequality is established to improve some existing results in the real domain. Besides, to avoid the traditional separation method, a sign function for complex numbers is proposed and some of its properties are derived. Under the proposed sign-function framework, by designing some novel and effective control schemes, constructing nontrivial Lyapunov functions, and developing some new inequality methods in the complex domain, several criteria for finite-time synchronization are derived and the settling time of synchronization is effectively estimated. Finally, the effectiveness of the theoretical results is demonstrated by some numerical examples.
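The abstract does not reproduce its complex sign function. A common definition in this literature, stated here as an assumption rather than the paper's exact construct, extends the real sign function to the unit direction of a complex number:

$$
[z] =
\begin{cases}
\dfrac{z}{|z|}, & z \neq 0,\\[4pt]
0, & z = 0,
\end{cases}
$$

which reduces to the usual sign function on the real axis and lets controllers act on the magnitude of complex synchronization errors without splitting them into real and imaginary parts.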
Finite-time stabilization by state feedback control for a class of time-varying nonlinear systems. In this paper, finite-time stabilization is considered for a class of nonlinear systems dominated by a lower-triangular model with a time-varying gain. Based on the finite-time Lyapunov stability theorem and dynamic gain control design approach, state feedback finite-time stabilization controllers are proposed with gains being tuned online by two dynamic equations. Different from many existing finite-time control designs for lower-triangular nonlinear systems, the celebrated backstepping method is not utilized here. It is observed that our design procedure is much simpler, and the resulting control gains are in general not as high as those provided by the backstepping method. A simulation example is given to demonstrate the effectiveness of the proposed design procedure.
Robust stability of Hopfield delayed neural networks via an augmented L-K functional. This paper focuses on the robust stability of artificial delayed neural networks. A free-matrix-based inequality strategy is developed by introducing a set of slack variables, which can be optimized via existing convex optimization algorithms. To reflect a large portion of the dynamical behaviors of the system, uncertain parameters are considered. By constructing an augmented Lyapunov functional, sufficient conditions are derived to guarantee that the considered neural systems are completely stable. The conditions are presented in the form of linear matrix inequalities (LMIs). Finally, numerical cases are given to show the suitability of the results presented.
Finite-time stabilization for a class of nonlinear systems via optimal control. In general, finite-time stabilization techniques can always stabilize a system if control cost is not considered. Considering the fact that control cost is a very important factor in control area, we investigate finite-time stabilization problem for a class of nonlinear systems in this paper, where the control cost can also be reduced. We formulate this problem into an optimal control problem, where the control functions are optimized such that the system can be stabilized with minimum control cost. Then, the control parameterization enhancing transform and the control parameterization method are applied to solve this problem. Two numerical examples are illustrated to show the effectiveness of the proposed method.
A Unified Framework Design for Finite-Time and Fixed-Time Synchronization of Discontinuous Neural Networks. In this article, the problems of finite-time/fixed-time synchronization have been investigated for discontinuous neural networks in the unified framework. To achieve the finite-time/fixed-time synchronization, a novel unified integral sliding-mode manifold is introduced, and corresponding unified control strategies are provided; some criteria are established for selecting suitable parameters for s...
A Fuzzy Lyapunov Function Method to Stability Analysis of Fractional-Order T–S Fuzzy Systems This article investigates the stability analysis and stabilization problems for fractional-order T–S fuzzy systems via fuzzy Lyapunov function method. A membership-function-dependent fuzzy Lyapunov function instead of the general quadratic Lyapunov function is employed to obtain the stability and stabilization criteria. Different from the general quadratic Lyapunov function, the fuzzy Lyapunov functions contain the product of three term functions. Since the general Leibniz formula cannot be satisfied for fractional derivative, the current results on the fractional derivative for the quadratic Lyapunov functions cannot be extended to the fuzzy Lyapunov functions. Therefore, to estimate the fractional derivative of fuzzy Lyapunov functions, the fractional derivative rule for the product of three term functions is proposed. Based on the proposed fractional derivative rule, the corresponding stability and stabilization criteria are established, which extend the existing results. Finally, two simulation examples are presented to illustrate the effectiveness of the proposed theoretical analysis.
Finite-time synchronization of nonidentical BAM discontinuous fuzzy neural networks with delays and impulsive effects via non-chattering quantized control •Two new inequalities are developed to deal with the mismatched coefficients of the fuzzy part.•A simple but robust quantized state feedback controller is designed to overcome the effects of discontinuous activations, time delay, and nonidentical coefficients simultaneously. The designed control schemes do not utilize the sign function and can save channel resources. Moreover, novel non-chattering quantized adaptive controllers are also considered to reduce the control cost.•By utilizing 1-norm analytical technique and comparison system method, the effect of impulses on the FTS is well coped with.•Without utilizing the finite-time stability theorem in [16], several FTS criteria are obtained. Moreover, the settling time is explicitly estimated. Results of this paper can easily be extended to FTS of other classical delayed impulsive NNs with or without nonidentical coefficients.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω, a fundamental failure detector at the core of many algorithms, in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector ◊P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption (the existence of some unknown correct process whose asynchronous links are lossy but fair), we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
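For readers new to Ω: its only obligation is that all correct processes eventually trust the same correct process. A toy timeout-based sketch in Python conveys the interface; the class and method names are ours, and this naive version consumes heartbeats from every process, which is precisely the quadratic message pattern the paper works to avoid:

    import time

    class OmegaOracle:
        """Toy Omega-style oracle: trust the smallest-id process whose
        heartbeat arrived recently. If timing eventually stabilizes, all
        correct processes converge on the same leader."""

        def __init__(self, process_ids, timeout=2.0):
            self.timeout = timeout
            self.last_seen = {p: time.monotonic() for p in process_ids}

        def on_heartbeat(self, pid):
            # called by the network layer whenever a heartbeat arrives
            self.last_seen[pid] = time.monotonic()

        def leader(self):
            now = time.monotonic()
            alive = [p for p, t in self.last_seen.items()
                     if now - t < self.timeout]
            return min(alive) if alive else None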
Bandwidth-efficient management of DHT routing tables Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20-23, 25, 26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table's size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or Gm-C integrators, while a delay line is used to perform voltage-to-time conversion and to sum time signals. A simple flip-flop generates the pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for a wide-bandwidth error amplifier and pulse-width modulator (PWM) in analog controllers, or a high-resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in a small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm², of which the controller occupies only 0.0375 mm². It operates over a wide range of switching frequencies (10-25 MHz) and regulates the output to any desired voltage in the range of 0.6 V to 1.5 V with a 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
ELSA: Hardware-Software Co-design for Efficient, Lightweight Self-Attention Mechanism in Neural Networks The self-attention mechanism is rapidly emerging as one of the most important key primitives in neural networks (NNs) for its ability to identify the relations within input entities. The self-attention-oriented NN models such as Google Transformer and its variants have established the state-of-the-art on a very wide range of natural language processing tasks, and many other self-attention-oriented models are achieving competitive results in computer vision and recommender systems as well. Unfortunately, despite its great benefits, the self-attention mechanism is an expensive operation whose cost increases quadratically with the number of input entities that it processes, and thus accounts for a significant portion of the inference runtime. Thus, this paper presents ELSA (Efficient, Lightweight Self-Attention), a hardware-software co-designed solution to substantially reduce the runtime as well as energy spent on the self-attention mechanism. Specifically, based on the intuition that not all relations are equal, we devise a novel approximation scheme that significantly reduces the amount of computation by efficiently filtering out relations that are unlikely to affect the final output. With the specialized hardware for this approximate self-attention mechanism, ELSA achieves a geomean speedup of 58.1× as well as over three orders of magnitude improvements in energy efficiency compared to GPU on self-attention computation in modern NN models while maintaining less than 1% loss in the accuracy metric.
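As a rough illustration of the filter-then-attend idea, the sketch below masks out weak relations before the softmax; the per-query top-k rule and all names are ours, not ELSA's actual approximation scheme or hardware datapath:

    import numpy as np

    def approx_self_attention(Q, K, V, keep_ratio=0.25):
        # Score all query/key relations, then keep only the strongest
        # fraction per query; everything else is treated as irrelevant.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                     # (n, n) relations
        k = max(1, int(keep_ratio * scores.shape[1]))
        thresh = np.sort(scores, axis=1)[:, -k][:, None]  # k-th largest per row
        scores = np.where(scores >= thresh, scores, -np.inf)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                 # softmax over survivors
        return w @ V

Computation drops roughly in proportion to keep_ratio, which is the source of the runtime and energy savings that the dedicated hardware then exploits.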
A bridging model for parallel computation, communication, and I/O
A Framework for Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters with Work and Weight Load Balancing To improve flexibility and energy efficiency of Convolutional Neural Networks, a number of cloud computing service providers, including Microsoft, Amazon, and Alibaba, are using FPGA-based CNN accelerators. However, the growing size and complexity of neural networks, coupled with communication and off-chip memory bottlenecks, make it increasingly difficult for multi-FPGA designs to achieve high resource utilization and performance, especially when training. In this work, we present new results for a scalable framework, FPDeep, which helps users efficiently map CNN training logic to multiple FPGAs and automatically generates the resulting RTL implementation. FPDeep is equipped with two mechanisms to facilitate high-performance and energy-efficient training. First, FPDeep improves DSP slice utilization across FPGAs by balancing workload using dedicated partition and mapping strategies. Second, only on-chip memory is used in the CONV layers: a) FPDeep balances CNN weight allocation among FPGAs to improve BRAM utilization; b) training of CNNs is executed in a fine-grained pipelined manner, minimizing the time features need to be cached while waiting for back-propagation, leading to a reduced storage demand. We evaluate our framework by training AlexNet, VGG-16, and VGG-19. Experimental results show FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the inter-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. FPDeep provides, on average, 6.36× higher energy efficiency than GPU servers.
Topic-to-Essay Generation with Neural Networks.
BAE: BERT-based Adversarial Examples for Text Classification.
EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference Transformer-based language models such as BERT provide significant accuracy improvement to a multitude of natural language processing (NLP) tasks. However, their hefty computational and memory demands make them challenging to deploy to resource-constrained edge platforms with strict latency requirements. We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimizations for multi-task NLP. EdgeBERT employs entropy-based early exit predication in order to perform dynamic voltage-frequency scaling (DVFS), at a sentence granularity, for minimal energy consumption while adhering to a prescribed target latency. Computation and memory footprint overheads are further alleviated by employing a calibrated combination of adaptive attention span, selective network pruning, and floating-point quantization. Furthermore, in order to maximize the synergistic benefits of these algorithms in always-on and intermediate edge computing settings, we specialize a 12 nm scalable hardware accelerator system, integrating a fast-switching low-dropout voltage regulator (LDO), an all-digital phase-locked loop (ADPLL), as well as high-density embedded non-volatile memories (eNVMs) wherein the sparse floating-point bit encodings of the shared multi-task parameters are carefully stored. Altogether, latency-aware multi-task NLP inference acceleration on the EdgeBERT hardware system generates up to 7×, 2.5×, and 53× lower energy compared to conventional inference without early stopping, the latency-unbounded early exit approach, and CUDA adaptations on an Nvidia Jetson Tegra X2 mobile GPU, respectively.
A domain-specific supercomputer for training deep neural networks Google's TPU supercomputers train deep neural networks 50x faster than general-purpose supercomputers running a high-performance computing benchmark.
A Case for Intelligent RAM Two trends call into question the current practice of microprocessors and DRAMs being fabricated as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip is becoming awkward to use in a system, yet size is growing at 60% per year. Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, and improve energy efficiency as well as to allow more flexible selection of memory size and organization. In addition, IRAM promises savings in power and board area. We review the state of microprocessors and DRAMs today, explore some of the opportunities and challenges for IRAMs, and finally estimate performance and energy efficiency of three IRAM designs.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
Multi-objective optimization using genetic algorithms: A tutorial Multi-objective formulations are realistic models for many complex engineering optimization problems. In many real-life problems, objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview and tutorial is presented describing genetic algorithms (GA) developed specifically for problems with multiple objectives. They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution diversity.
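The notion of Pareto dominance that these specialized fitness functions build on fits in a few lines of Python; the function names and toy example are ours:

    def dominates(a, b):
        # a Pareto-dominates b (minimization): no worse everywhere,
        # strictly better somewhere.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(population):
        # Non-dominated subset; ranking by repeated extraction of such
        # fronts is one classic multi-objective fitness assignment.
        return [p for p in population
                if not any(dominates(q, p) for q in population if q != p)]

    # pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]) -> [(1, 5), (2, 2), (3, 1)]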
Accelerating microprocessor silicon validation by exposing ISA diversity Microprocessor design validation is a time consuming and costly task that tends to be a bottleneck in the release of new architectures. The validation step that detects the vast majority of design bugs is the one that stresses the silicon prototypes by applying huge numbers of random tests. Despite its bug detection capability, this step is constrained by extreme computing needs for random tests simulation to extract the bug-free memory image for comparison with the actual silicon image. We propose a self-checking method that accelerates silicon validation and significantly increases the number of applied random tests to improve bug detection efficiency and reduce time-to-market. Analysis of four major ISAs (ARM, MIPS, PowerPC, and x86) reveals their inherent diversity: more than three quarters of the instructions can be replaced with equivalent instructions. We exploit this property in post-silicon validation and propose a methodology for the generation of random tests that detect bugs by comparing results of equivalent instructions. We support our bug detection method in hardware with a light-weight mechanism which, in case of a mismatch, replays the random test replacing the offending instruction with its equivalent. Our bug detection method and corresponding hardware significantly accelerate the post-silicon validation process. Evaluation of the method on an x86 microprocessor model demonstrates its efficiency over simulation-based and self-checking alternatives, in terms of bug detection capabilities and validation time speedup.
A CMOS IQ Digital Doherty Transmitter using modulated tuning capacitors This paper presents a new approach to increase the output power and to enhance the drain efficiency of Direct Digital RF Modulators (DDRM). Two differential four-phase DDRMs are organized in a Doherty-like configuration using two different transformers. The modulated tuning capacitors concept is proposed to achieve a high efficiency at maximum output power and at back-off. To demonstrate this principle, a 2 GHz IQ Digital Doherty Transmitter with on-chip transformers has been integrated in 90 nm CMOS Technology. The digital IQ transmitter achieves a maximum output power of 24.8 dBm with 26% drain efficiency and 26% drain efficiency at 6 dB back-off. With a 10 MHz RFBW multi-tone OFDM signal, the transmitter consumes 176 mA from a 2.4 V supply. It achieves 18.8 dBm RMS output power with 18% average drain efficiency.
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator's signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 μW and occupying an active area of 0.0126 mm².
1.033333
0.033333
0.033333
0.033333
0.033333
0.022222
0.008333
0.000952
0
0
0
0
0
0
CHIPKIT: An agile, reusable open-source framework for rapid test chip development The current trend for domain-specific architectures has led to renewed interest in research test chips to demonstrate new specialized hardware. Tapeouts also offer huge pedagogical value garnered from real hands-on exposure to the whole system stack. However, success with tapeouts requires hard-earned experience, and the design process is time consuming and fraught with challenges. Therefore, custom chips have remained the preserve of a small number of research groups, typically focused on circuit design research. This article describes the CHIPKIT framework: a reusable SoC subsystem which provides basic IO, an on-chip programmable host, off-chip hosting, memory, and peripherals. This subsystem can be readily extended with new IP blocks to generate custom test chips. Central to CHIPKIT is an agile RTL development flow, including a code generation tool called VGEN. Finally, we discuss best practices for full-chip validation across the entire design cycle.
Chipyard: Integrated Design, Simulation, and Implementation Framework for Custom SoCs Continued improvement in computing efficiency requires functional specialization of hardware designs. Agile hardware design methodologies have been proposed to alleviate the increased design costs of custom silicon architectures, but their practice thus far has been accompanied with challenges in integration and validation of complex systems-on-a-chip (SoCs). We present the Chipyard framework, an integrated SoC design, simulation, and implementation environment for specialized compute systems. Chipyard includes configurable, composable, open-source, generator-based IP blocks that can be used across multiple stages of the hardware development flow while maintaining design intent and integration consistency. Through cloud-hosted FPGA accelerated simulation and rapid ASIC implementation, Chipyard enables continuous validation of physically realizable customized systems.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design, and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
The Oracle Problem in Software Testing: A Survey Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behavior is called the “test oracle problem”. Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test or...
BROOM: An Open-Source Out-of-Order Processor With Resilient Low-Voltage Operation in 28-nm CMOS The Berkeley resilient out-of-order machine (BROOM) is a resilient, wide-voltage-range implementation of an open-source out-of-order (OoO) RISC-V processor implemented in an ASIC flow. A 28-nm test-chip contains a BOOM OoO core and a 1-MiB level-2 (L2) cache, enhanced with architectural error tolerance for low-voltage operation. It was implemented by using an agile design methodology, where the initial OoO architecture was transformed to perform well in a high-performance, low-leakage CMOS process, informed by synthesis, place, and route data by using foundry-provided standard-cell library and memory compiler. The two-person-team productivity was improved in part thanks to a number of open-source artifacts: The Chisel hardware construction language, the RISC-V instruction set architecture, the rocket-chip SoC generator, and the open-source BOOM core. The resulting chip, taped out using TSMC’s 28-nm HPM process, runs at 1.0 GHz at 0.9 V, and is able to operate down to 0.47 V.
A Case for Accelerating Software RTL Simulation RTL simulation is a critical tool for hardware design but its current slow speed often bottlenecks the whole design process. Simulation speed becomes even more crucial for agile and open-source hardware design methodologies, because the designers not only want to iterate on designs quicker, but they may also have less resources with which to simulate them. In this article, we execute multiple simulators and analyze them with hardware performance counters. We find some open-source simulators not only outperform a leading commercial simulator, they also achieve comparable or higher instruction throughput on the host processor. Although advanced optimizations may increase the complexity of the simulator, they do not significantly hinder instruction throughput. Our findings make the case that there is significant room to accelerate software simulation and open-source simulators are a great starting point for researchers.
A Hybrid Systolic-Dataflow Architecture for Inductive Matrix Algorithms Dense linear algebra kernels are critical for wireless, and the oncoming proliferation of 5G only amplifies their importance. Due to the inductive nature of many such algorithms, parallelism is difficult to exploit: parallel regions have fine-grain producer/consumer interaction with iteratively changing dependence distance, reuse rate, and memory access patterns. This makes multi-threading impractical due to fine-grain synchronization, and vectorization ineffective due to the non-rectangular iteration domain. CPUs, DSPs, and GPUs perform an order of magnitude below peak. Our insight is that if the nature of inductive dependences and memory accesses were explicit in the hardware/software interface, then a spatial architecture could efficiently execute parallel code regions. To this end, we first develop a novel execution model, inductive dataflow, where inductive dependence patterns and memory access patterns (streams) are first-order primitives. Second, we develop a hybrid spatial architecture combining systolic and tagged dataflow execution to attain high utilization at low energy and area cost. Finally, we create a scalable design through a novel vector-stream control model which amortizes control overhead both in time and spatially across architecture lanes. We evaluate our design, REVEL, with a full stack (compiler, ISA, simulator, RTL). Across a suite of linear algebra kernels, REVEL outperforms equally-provisioned DSPs by 4.6×-37×. Compared to state-of-the-art spatial architectures, REVEL is on average 3× faster. Compared to a set of ASICs, REVEL is only 2× the power and half the area.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Scalable video coding and transport over broadband wireless networks With the emergence of broadband wireless networks and increasing demand of multimedia information on the Internet, wireless multimedia services are foreseen to become widely deployed in the next decade. Real-time video transmission typically has requirements on quality of service (QoS). However, wireless channels are unreliable and the channel bandwidth varies with time, which may cause severe deg...
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to its increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient, however they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient, however they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
A 40 Gb/s CMOS Serial-Link Receiver With Adaptive Equalization and Clock/Data Recovery This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high-...
VirFID: A Virtual Force (VF)-based Interest-Driven moving phenomenon monitoring scheme using multiple mobile sensor nodes. In this paper, we study mobile sensor network (MSN) architectures and algorithms for monitoring a moving phenomenon in an unknown and open area using a group of autonomous mobile sensor (MS) nodes. Monitoring a moving phenomenon involves challenges due to limited communication/sensing ranges of MS nodes, the phenomenon’s unpredictable changes in distribution and position, and the lack of information on the sensing area. To address the challenges and meet the objective of the maximization of weighted sensing coverage, we propose a novel scheme, namely VirFID (Virtual Force (VF)-based Interest-Driven moving phenomenon monitoring). In VirFID, MS nodes move toward the positions where more interesting sensing data can be obtained by utilizing the virtual force, which is calculated based on the distance between MS nodes and sensed values in the area of interest. MS nodes also perform network-wise information sharing to increase the weighted sensing coverage. Depending on the level of information used, three variants of VirFID are evaluated: VirFID-LIB (Local Information-Based), VirFID-GHL (Global Highest and Lowest), and VirFID-IBN (Interests at Boundary Nodes). In addition, an analytical model for estimating MSN speed is designed. Simulations are performed to compare the performance of three VirFID variants with other approaches. Our simulation results show that VirFID algorithms outperform other schemes in terms of the weighted coverage efficiency, and VirFID-IBN achieves the highest weighted coverage efficiency among VirFID variants.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low-power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme, without the common-mode voltage variation. In addition, a near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation at the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and a sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
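For context, the successive-approximation principle underneath any SAR ADC is a binary search over DAC codes. The behavioral Python sketch below shows only that generic loop; it models neither the proposed adaptive-reset CDAC switching nor the NTV circuit techniques, and the names are ours:

    def sar_convert(vin, vfs=1.0, nbits=10):
        # Binary search: propose each bit from MSB to LSB and keep it
        # if the comparator says the input still exceeds the DAC level.
        code = 0
        for bit in range(nbits - 1, -1, -1):
            trial = code | (1 << bit)
            if vin >= vfs * trial / (1 << nbits):   # comparator decision
                code = trial
        return code

    # sar_convert(0.30) -> 307, i.e. floor(0.30 * 1024)

Each comparator decision charges or discharges CDAC capacitors, which is why switching schemes like the adaptive-reset one dominate the converter's energy budget.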
1.11
0.11
0.1
0.1
0.1
0.1
0.1
0.003333
0
0
0
0
0
0
Lattice scheduling and covert channels. The lattice scheduler is a process scheduler that reduces the performance penalty of certain covert-channel countermeasures by scheduling processes using access class attributes. The lattice scheduler was developed as part of the covert-channel analysis of the VAX security kernel. The VAX security kernel is a virtual-machine monitor security kernel for the VAX architecture designed to meet the requirements of the A1 rating from the US National Computer Security Center. After describing the cache channel, a description is given of how this channel can be exploited using the VAX security kernel as an example. The author discusses how this channel can be closed and the performance effects of closing the channel. The lattice scheduler is introduced, and its use in closing the cache channel is demonstrated. Finally, the work illustrates the operation of the lattice scheduler through an extended example and concludes with a discussion of some variations of the basic scheduling algorithm
Page placement algorithms for large real-indexed caches When a computer system supports both paged virtual memory and large real-indexed caches, cache performance depends in part on the main memory page placement. To date, most operating systems place pages by selecting an arbitrary page frame from a pool of page frames that have been made available by the page replacement algorithm. We give a simple model that shows that this naive (arbitrary) page placement leads to up to 30% unnecessary cache conflicts. We develop several page placement algorithms, called careful-mapping algorithms, that try to select a page frame (from the pool of available page frames) that is likely to reduce cache contention. Using trace-driven simulation, we find that careful mapping results in 10–20% fewer (dynamic) cache misses than naive mapping (for a direct-mapped real-indexed multimegabyte cache). Thus, our results suggest that careful mapping by the operating system can get about half the cache miss reduction that a cache size (or associativity) doubling can.
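The heart of careful mapping is color-matched frame allocation; a simplified Python sketch follows, in which the pool layout and the naive fallback are our simplifications rather than the paper's exact algorithms:

    def pick_frame(free_frames, vpage, cache_sets, page_size=4096, line_size=32):
        # A direct-mapped real-indexed cache holds `colors` distinct page
        # colors; matching the virtual page's color avoids adding conflicts.
        colors = max(1, cache_sets * line_size // page_size)
        want = vpage % colors
        for f in free_frames:
            if f % colors == want:        # color match found
                free_frames.remove(f)
                return f
        return free_frames.pop()          # no match: naive (arbitrary) pick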
TILE64 Processor: A 64-Core SoC with Mesh Interconnect The TILE64™ processor is a multicore SoC targeting the high-performance demands of a wide range of embedded applications across networking and digital multimedia. Its 64 tile processors are arranged in an 8×8 array and connect through a scalable 2D mesh network with high-speed I/Os on the periphery. Each general-purpose processor is identical and capable of running SMP Linux.
Whispers in the Hyper-Space: High-Bandwidth and Reliable Covert Channel Attacks Inside the Cloud Privacy and information security in general are major concerns that impede enterprise adaptation of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a “potential risk.” In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment and uncover their major insufficiency and difficulties. We then overcome these obstacles by: 1) redesigning a pure timing-based data transmission scheme, and 2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experimental results show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection. Neural networks have become an increasingly popular solution for network intrusion detection systems (NIDS). Their capability of learning complex patterns and behaviors makes them a suitable solution for differentiating between normal traffic and network attacks. However, a drawback of neural networks is the amount of resources needed to train them. Many network gateway and router devices, which could potentially host an NIDS, simply do not have the memory or processing power to train and sometimes even execute such models. More importantly, the existing neural network solutions are trained in a supervised manner, meaning that an expert must label the network traffic and update the model manually from time to time. In this paper, we present Kitsune: a plug-and-play NIDS which can learn to detect attacks on the local network, without supervision, and in an efficient online manner. Kitsune's core algorithm (KitNET) uses an ensemble of neural networks called autoencoders to collectively differentiate between normal and abnormal traffic patterns. KitNET is supported by a feature extraction framework which efficiently tracks the patterns of every network channel. Our evaluations show that Kitsune can detect various attacks with a performance comparable to offline anomaly detectors, even on a Raspberry PI. This demonstrates that Kitsune can be a practical and economic NIDS.
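KitNET's anomaly scores are autoencoder reconstruction errors. A minimal sketch of the scoring path of a single tiny autoencoder is below; the weights would come from online unsupervised training, and the shapes, names, and thresholding comment are illustrative assumptions:

    import numpy as np

    def ae_score(x, enc_w, dec_w):
        # RMSE between a feature vector and its reconstruction; KitNET
        # aggregates many such per-cluster scores with an output autoencoder.
        z = np.tanh(enc_w @ x)       # encode
        x_hat = dec_w @ z            # decode
        return float(np.sqrt(np.mean((x - x_hat) ** 2)))

    # flag traffic whose score exceeds a cutoff fitted on benign traffic,
    # e.g. ae_score(x, We, Wd) > cutoff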
A Survey of Microarchitectural Timing Attacks and Countermeasures on Contemporary Hardware. Microarchitectural timing channels expose hidden hardware states though timing. We survey recent attacks that exploit microarchitectural features in shared hardware, especially as they are relevant for cloud computing. We classify types of attacks according to a taxonomy of the shared resources leveraged for such attacks. Moreover, we take a detailed look at attacks used against shared caches. We survey existing countermeasures. We finally discuss trends in attacks, challenges to combating them, and future directions, especially with respect to hardware support.
Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds Third-party cloud computing represents the promise of outsourcing as applied to computation. Services, such as Microsoft's Azure and Amazon's EC2, allow users to instantiate virtual machines (VMs) on demand and thus purchase precisely the capacity they require when they require it. In turn, the use of virtualization allows third-party cloud providers to maximize the utilization of their sunk capital costs by multiplexing many customer VMs across a shared physical infrastructure. However, in this paper, we show that this approach can also introduce new vulnerabilities. Using the Amazon EC2 service as a case study, we show that it is possible to map the internal cloud infrastructure, identify where a particular target VM is likely to reside, and then instantiate new VMs until one is placed co-resident with the target. We explore how such placement can then be used to mount cross-VM side-channel attacks to extract information from a target VM on the same machine.
SafeSpec: Banishing the Spectre of a Meltdown with Leakage-Free Speculation. Speculative attacks, such as Spectre and Meltdown, target speculative execution to access privileged data and leak it through a side-channel. In this paper, we introduce SafeSpec, a new model for supporting speculation in a way that is immune to side-channel leakage, by storing the side effects of speculative instructions in separate structures until they commit. Additionally, we address the possibility of a covert channel from speculative instructions to committed instructions before these instructions are committed. We develop a cycle-accurate model of a modified design of an x86-64 processor and show that the performance impact is negligible.
A Hardware Architecture for Switch-Level Simulation The Mossim Simulation Engine (MSE) is a hardware accelerator for performing switch-level simulation of MOS VLSI circuits [1], [2]. Functional partitioning of the MOSSIM algorithm and specialized circuitry are used by the MSE to achieve a performance improvement of > 300 over a VAX 11/780 executing the MOSSIM II program. Several MSE processors can be connected in parallel to achieve additional speedup. A virtual processor mechanism allows the MSE to simulate large circuits with the size of the circuit limited only by the amount of backing store available to hold the circuit description.
Assessing merged DRAM/logic technology This paper describes the impact of a DRAM process on the logic circuit performance of memory/logic merged integrated circuits and the alternative circuit design technology to offset the performance penalty. Extensive circuit and routing simulations have been performed to study the logic circuit performance degradation when the merged chip is implemented on a DRAM process. Three logic processes (0.5, 0.6 and 0.8 μm) and two corresponding contemporary DRAM (64 and 256 Mb) processes have been selected for the study, knowing that the performance difference between the logic and DRAM processes can be extrapolated for the advanced processes. The simulation results show that the logic circuit performance is degraded by about 22% on a DRAM process, including the increased interconnect delay due to fewer interconnect layers being available in the DRAM process. The silicon area is increased by up to 80%, depending on the number of nets and components, when implementing a logic circuit in a DRAM process. Simulation results show that the performance penalty can be well offset if the same circuit used in the simulation is implemented using dynamic circuit techniques. Keywords: DRAM/logic merged technology; embedded DRAM; DRAM; memory; VLSI.
On the complexity of division and set joins in the relational algebra We show that any expression of the relational division operator in the relational algebra with union, difference, projection, selection, and equijoins, must produce intermediate results of quadratic size. To prove this result, we show a dichotomy theorem about intermediate sizes of relational algebra expressions (they are either all linear, or at least one is quadratic); we link linear relational algebra expressions to expressions using only semijoins instead of joins; and we link these semijoin algebra expressions to the guarded fragment of first-order logic.
Comparing performances of logistic regression, classification and regression tree, and neural networks for predicting coronary artery disease In this study, performances of classification techniques were compared in order to predict the presence of coronary artery disease (CAD). A retrospective analysis was performed in 1245 subjects (865 presence of CAD and 380 absence of CAD). We compared performances of logistic regression (LR), classification and regression tree (CART), multi-layer perceptron (MLP), radial basis function (RBF), and self-organizing feature maps (SOFM). Predictor variables were age, sex, family history of CAD, smoking status, diabetes mellitus, systemic hypertension, hypercholesterolemia, and body mass index (BMI). Performances of classification techniques were compared using ROC curve, Hierarchical Cluster Analysis (HCA), and Multidimensional Scaling (MDS). Areas under the ROC curves are 0.783, 0.753, 0.745, 0.721, and 0.675, respectively for MLP, LR, CART, RBF, and SOFM. MLP was found the best technique to predict presence of CAD in this data set, given its good classificatory performance. MLP, CART, LR, and RBF performed better than SOFM in predicting CAD in according to HCA and MDS.
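A present-day replication of this kind of comparison takes a few lines of scikit-learn. The sketch below substitutes synthetic data for the patient cohort, so its AUCs are illustrative and not comparable to the study's (e.g., 0.783 for MLP vs. 0.753 for LR):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Stand-in for the cohort: 1245 subjects, 8 predictors, binary outcome.
    X, y = make_classification(n_samples=1245, n_features=8, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, model in [("LR", LogisticRegression(max_iter=1000)),
                        ("MLP", MLPClassifier(max_iter=1000, random_state=0))]:
        proba = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
        print(name, "AUC =", round(roc_auc_score(yte, proba), 3))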
Lossy data compression using FDCT for haptic communication In this paper, a DCT-based lossy haptic data compression method for haptic communication systems is proposed to reduce the data size flowing between a master and a slave system. The calculation load for the DCT can be high, and the performance and stability of the system can deteriorate due to the high calculation load. In order to keep the system hard real-time and the performance high, a fast calculation algorithm for the DCT is adopted, and the calculation load is balanced over several sampling periods. The time delay introduced through the compression/expansion of the haptic data is predictable and constant, and can therefore be compensated by a time delay compensator. Furthermore, since the delay in this paper is small enough, stable contact with a hard environment is achieved without any time delay compensator. The validity of the proposed lossy haptic data compression method is shown through simulation and experimental results.
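Independent of the scheduling concerns, the lossy step itself can be sketched with SciPy's DCT; the keep-fraction policy below is an illustrative stand-in for the paper's coefficient selection:

    import numpy as np
    from scipy.fft import dct, idct

    def compress_frame(x, keep=0.25):
        # Keep only the largest-magnitude fraction of DCT coefficients;
        # the receiver reconstructs the haptic samples with the inverse DCT.
        c = dct(np.asarray(x, dtype=float), norm="ortho")
        k = max(1, int(keep * len(c)))
        cutoff = np.sort(np.abs(c))[-k]
        c[np.abs(c) < cutoff] = 0.0       # the lossy step
        return idct(c, norm="ortho")

The paper additionally spreads this transform's work over several control periods so that the per-sample load stays bounded, which the sketch omits.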
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.019915
0.020294
0.015385
0.015385
0.015385
0.00875
0.004854
0.000429
0.000019
0
0
0
0
0
An Inductorless Fractional-N PLL Using Harmonic-Mixer-Based Dual Feedback and High-OSR Delta-Sigma-Modulator with Phase-Domain Filtering An inductorless Harmonic-Mixer (HM) based fractional-N PLL is proposed. It simultaneously achieves Delta-Sigma-Modulator (DSM) noise suppression and a wide loop bandwidth by employing a high-OSR DSM and nested-PLL-based phase-domain lowpass filtering inside of the dual-feedback architecture. A 2.8-3.5 GHz prototype implemented in 65-nm CMOS achieves a -227.6 dB FoM with an 8 MHz bandwidth, with no calibration circuitry and with a compact layout containing no inductors.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
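Dominance frontiers admit a compact computation; the sketch below follows the later Cooper-Harvey-Kennedy formulation of the same structure rather than this paper's original presentation, and the input encoding is ours:

    def dominance_frontiers(preds, idom):
        # preds: node -> list of CFG predecessors
        # idom:  node -> immediate dominator (entry maps to itself)
        df = {n: set() for n in preds}
        for n, ps in preds.items():
            if len(ps) < 2:
                continue                  # frontiers arise only at joins
            for p in ps:
                runner = p
                while runner != idom[n]:  # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
        return df

SSA construction then places a phi-function for a variable at the (iterated) dominance frontier of every block that assigns it.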
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
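Chord's single operation, key to node, reduces to "the first node identifier at or after the key's identifier on the ring." The toy Python sketch below assumes a full membership list, which real Chord never needs (it resolves the same mapping in O(log n) hops via finger tables); the identifier-space size and names are ours:

    import hashlib
    from bisect import bisect_left

    M = 2 ** 16                            # toy identifier space

    def ring_id(name):
        # hash keys and node names onto the same circular id space
        return int(hashlib.sha1(str(name).encode()).hexdigest(), 16) % M

    def successor(node_ids, key):
        ring = sorted(node_ids)
        i = bisect_left(ring, ring_id(key))
        return ring[i % len(ring)]         # wrap around past the largest id

    # nodes = [ring_id(f"node{i}") for i in range(8)]
    # successor(nodes, "my-file.txt") -> the node responsible for the key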
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
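The combined metrics are simple arithmetic once a single framework supplies consistent energy, delay, and area numbers; in the trivial sketch below we read the squared term in EDA²P as applying to area, as the metric's name suggests:

    def edap(energy, delay, area):
        # energy-delay-area product
        return energy * delay * area

    def eda2p(energy, delay, area):
        # energy-delay-area^2 product: weights silicon area more heavily
        return energy * delay * area ** 2

Given McPAT outputs for two candidate cluster configurations, the preferred design is simply the one with the smaller metric value.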
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
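Since this abstract carries the core algorithmic recipe, a minimal sketch may help: ADMM applied to the lasso, one of the applications the review lists. The splitting (ridge x-step, soft-threshold z-step, scaled dual u-step) is the standard one; the parameter names rho and lam and the fixed iteration count are illustrative choices, not from the paper.

    import numpy as np

    def lasso_admm(A, b, lam, rho=1.0, iters=100):
        """Minimal ADMM sketch for minimize (1/2)||Ax - b||^2 + lam*||x||_1.
        Illustrative only; parameters and stopping rule are assumptions."""
        n = A.shape[1]
        x = np.zeros(n)
        z = np.zeros(n)
        u = np.zeros(n)
        AtA_rhoI = A.T @ A + rho * np.eye(n)   # cached for every x-update
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # ridge step
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
            u = u + x - z                                       # scaled dual update
        return z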
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Computing Shortest, Fastest, and Foremost Journeys in Dynamic Networks New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to capture effectively in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least cost journeys (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys). Keywords: dynamic networks, routing, evolving graphs, graph theoretical models, LEO satellite networks, fixed-schedule dynamic networks
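As an illustration of the journey metrics above, here is a minimal sketch of computing foremost (earliest-arrival) journeys when the schedule of contacts is known in advance; the (u, v, t) contact-list encoding, the punctual contacts, and the one-time-step traversal are simplifying assumptions, not the paper's evolving-graph formalism.

    def foremost_arrivals(contacts, source):
        """Earliest arrival date from `source` to every node, given undirected
        contacts (u, v, t) of a fixed-schedule dynamic network. Illustrative
        sketch: contacts are punctual and crossing an edge takes one step."""
        arrival = {source: 0}
        for u, v, t in sorted(contacts, key=lambda c: c[2]):
            # A contact at time t is usable only from a node reached by t.
            if u in arrival and arrival[u] <= t:
                if v not in arrival or t + 1 < arrival[v]:
                    arrival[v] = t + 1
            if v in arrival and arrival[v] <= t:
                if u not in arrival or t + 1 < arrival[u]:
                    arrival[u] = t + 1
        return arrival

    # Node c is reachable only via the time-respecting journey a -> b -> c.
    print(foremost_arrivals([("a", "b", 1), ("b", "c", 3)], "a"))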
Scalable Routing in Cyclic Mobile Networks The nonexistence of an end-to-end path poses a challenge in adapting traditional routing algorithms to delay-tolerant networks (DTNs). Previous works have covered centralized routing approaches based on deterministic mobility, ferry-based routing with deterministic or semideterministic mobility, flooding-based approaches for networks with general mobility, and probability-based routing for semideterministic mobility models. Unfortunately, none of these methods can guarantee both scalability and delivery. In this paper, we extend the investigation of scalable deterministic routing in DTNs with repetitive mobility based on our previous works. Instead of routing with global contact knowledge, we propose a routing algorithm that routes on contact information compressed by three combined methods. We address the challenge of efficient information aggregation and compression in the time-space domain while maintaining critical information for efficient routing. Then, we extend it to handle a moderate level of uncertainty in contact prediction. Analytical studies and simulation results show that the performance of our proposed routing algorithm, DTN hierarchical routing (DHR), is comparable to that of the optimal time-space Dijkstra algorithm in terms of delay and hop count. At the same time, the per-node storage overhead is substantially reduced and becomes scalable.
Measuring Temporal Lags in Delay-Tolerant Networks Delay-tolerant networks (DTNs) are characterized by a possible absence of end-to-end communication routes at any instant. Yet, connectivity can be achieved over time and space, leading to evaluate a given route both in terms of topological length or temporal length. The problem of measuring temporal distances in a social network was recently addressed through postprocessing contact traces like email data sets, in which all contacts are punctual in time (i.e., they have no duration). We focus on the distributed version of this problem and address the more general case that contacts can have arbitrary durations (i.e., be nonpunctual). Precisely, we ask whether each node in a network can track in real time how "out-of-date" it is with respect to every other. Although relatively straightforward with punctual contacts, this problem is substantially more complex with arbitrarily long contacts: consecutive hops of an optimal route may either be disconnected (intermittent connectedness of DTNs) or connected (i.e., the presence of links overlaps in time, implying a continuum of path opportunities). The problem is further complicated (and yet, more realistic) by the fact that we address continuous-time systems and nonnegligible message latencies (time to propagate a single message over a single link); however, this latency is assumed fixed and known. We demonstrate the problem is solvable in this general context by generalizing a time-measurement vector clock construct to the case of "nonpunctual" causality, which results in a tool we call T-Clocks, of independent interest. The remainder of the paper shows how T-Clocks can be leveraged to solve concrete problems such as learning foremost broadcast trees (BTs), network backbones, or fastest broadcast trees in periodic DTNs.
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√log n)} · n time units. We show that the lower bound of the algorithm is 2^{Ω(√log n)} · n time units.
Computing the Dynamic Diameter of Non-Deterministic Dynamic Networks is Hard. A dynamic network is a communication network whose communication structure can evolve over time. The dynamic diameter is the counterpart of the classical static diameter: it is the maximum time needed for a node to causally influence any other node in the network. We consider the problem of computing the dynamic diameter of a given dynamic network. If the evolution is known a priori, that is, if the network is deterministic, this dynamic diameter is known to be quite easy to compute. If the evolution is not known a priori, that is, if the network is non-deterministic, we show that the problem is hard to solve or approximate. In some cases, this hardness holds even when there is a static connected subgraph for the dynamic network. In this note, we consider an important subfamily of non-deterministic dynamic networks: the time-homogeneous dynamic networks. We prove that it is hard to compute and approximate the value of the dynamic diameter for time-homogeneous dynamic networks.
Searching for black-hole faults in a network using multiple agents We consider a fixed communication network where (software) agents can move freely from node to node along the edges. A black hole is a faulty or malicious node in the network such that if an agent enters this node, then it immediately "dies." We are interested in designing an efficient communication algorithm for the agents to identify all black holes. We assume that we have k agents starting from the same node s and knowing the topology of the whole network. The agents move through the network in synchronous steps and can communicate only when they meet in a node. At the end of the exploration of the network, at least one agent must survive and must know the exact locations of the black holes. If the network has n nodes and b black holes, then any exploration algorithm needs Ω(n/k + D_b) steps in the worst case, where D_b is the worst-case diameter of the network with at most b nodes deleted. We give a general algorithm which completes exploration in O((n/k)·log n/log log n + b·D_b) steps for arbitrary networks, if b ≤ k/2. For the case when b ≤ k/2, we also give a refined algorithm which completes exploration in asymptotically optimal O(n/k) steps.
Eventual Leader Election in Evolving Mobile Networks.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
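A minimal sketch of the one operation Chord exposes, key → node, via the successor on the identifier ring. Real Chord resolves this in O(log n) hops using finger tables; this centralized version, with its illustrative ring size and naming, only shows the mapping itself.

    import bisect
    import hashlib

    def chord_id(name, m=16):
        """Hash a name onto a 2**m identifier ring (m = 16 is illustrative)."""
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** m)

    def successor(node_ids, key_id):
        """Map a key to the first node clockwise from it on the ring.
        Centralized stand-in for Chord's distributed finger-table lookup."""
        ids = sorted(node_ids)
        i = bisect.bisect_left(ids, key_id)
        return ids[i % len(ids)]   # wrap around the ring

    nodes = [chord_id(f"node-{k}") for k in range(8)]   # hypothetical node names
    print(successor(nodes, chord_id("my-data-item")))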
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± δ)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as δ increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
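To make the direct insertion technique concrete, here is a hedged sketch for an up-conversion factor of (N + d)/N; spreading the d insertion points evenly across each block is a simple stand-in for the paper's optimum selection algorithm, not a reproduction of it.

    def direct_insertion_src(x, N, d):
        """Convert each block of N input samples into N + d output samples by
        repeating d samples. Even spreading of insertion points is an
        illustrative heuristic; the paper optimizes this selection."""
        out = []
        for start in range(0, len(x) - N + 1, N):
            block = list(x[start:start + N])
            points = sorted(round((k + 1) * N / (d + 1)) for k in range(d))
            for off, p in enumerate(points):
                block.insert(p + off, block[p + off])   # repeat the sample
            out.extend(block)
        return out

    # 4 input samples -> 5 output samples per block (factor 5/4).
    print(direct_insertion_src([0.0, 1.0, 2.0, 3.0], N=4, d=1))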
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores: 1.028217, 0.028217, 0.026077, 0.022222, 0.022222, 0.014554, 0.006667, 0.00028, 0.000013, 0, 0, 0, 0, 0
Low Power CMOS-Based Hall Sensor with Simple Structure Using Double-Sampling Delta-Sigma ADC. A CMOS (complementary metal-oxide-semiconductor) Hall sensor with low power consumption and a simple structure is introduced. The tiny magnetic signal from the Hall device can be detected by a high-resolution delta-sigma ADC in the presence of offset and flicker noise. The offset as well as the flicker noise are effectively suppressed by the current-spinning technique combined with the double-sampling switches of the ADC. The double-sampling scheme of the ADC reduces the operating frequency and helps to reduce the power consumption. The prototype Hall sensor is fabricated in a 0.18-μm CMOS process, and the measurement shows a detection range of ±150 mT and a sensitivity of 110 μV/mT. The size of the active area is 0.7 mm², and the total power consumption is 4.9 mW. The proposed system is advantageous not only for low power consumption, but also for small sensor size due to its simplicity.
A highly sensitive CMOS digital Hall sensor for low magnetic field applications. Integrated CMOS Hall sensors have been widely used to measure magnetic fields. However, they are difficult to work with in a low magnetic field environment due to their low sensitivity and large offset. This paper describes a highly sensitive digital Hall sensor fabricated in 0.18-μm high-voltage CMOS technology for low field applications. The sensor consists of a switched cross-shaped Hall plate and a novel signal conditioner. It effectively eliminates offset and low-frequency 1/f noise by applying a dynamic quadrature offset cancellation technique. The measured results show the optimal Hall plate achieves a high current-related sensitivity of about 310 V/AT. The whole sensor has a remarkable ability to measure a minimum ±2 mT magnetic field and output a digital Hall signal in a wide temperature range from −40 °C to 120 °C.
A Fast T&H Overcurrent Detector for a Spinning Hall Current Sensor With Ping-Pong and Chopping Techniques This paper presents a fast spinning-current Hall sensor with 568 ns overall delay for sub-microsecond overcurrent detection (OCD) in a magnetic current sensor. By combining continuous-time chopping techniques and discrete-time dynamic offset cancellation techniques, the spinning frequency of 250 kHz does not limit the sensor speed. The proposed track-and-hold (T&H) ping-pong comparators extend the usage of auto-zeroing techniques for sensor interface applications. The design achieves a magnetic residual offset of 85 μT (mean) and 79 μT (1σ), while the offset drifts only 0.68 μT/°C (mean) and 0.27 μT/°C (1σ) from −40 °C to 150 °C. In addition, a background switched-capacitor filter breaks the limitation of high-frequency errors on conventional correlated double sampling techniques. The design thus reduces the input-referred noise to 136 μTrms with a bandwidth of 1.7 MHz, while consuming at least 30% less power than the other state-of-the-art designs. Moreover, the analog stress compensation with temperature coefficient (TC) correction guarantees an overall threshold error within ±4% over package stress and temperature.
A CMOS Current-Mode Magnetic Hall Sensor With Integrated Front-End A Hall magnetic sensor working in the current domain and its electronic interface are presented. The paper describes the physical sensor design and implementation in a standard CMOS technology, the transistor-level design of its highly sensitive front-end, together with the sensor's experimental characterization. The current-mode Hall sensor and the analog readout circuit have been fabricated using a 0.18-μm CMOS technology. The sensor uses the current spinning technique to compensate for the offset and provides a differential current as an output signal. The measured sensor power consumption and residual offset are 120 μW and 50 μT, respectively.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
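A minimal sketch of the utility-versus-cost tradeoff described above, under an assumed scalar Gaussian model in which information utility is the entropy reduction of the posterior; the sensor record fields and the cost weight beta are illustrative, not from the article.

    import math

    def pick_next_sensor(prior_var, sensors, beta=0.1):
        """Greedy information-driven selection sketch: choose the sensor whose
        expected entropy reduction best justifies its communication cost.
        Gaussian utility and the weight `beta` are illustrative assumptions."""
        def objective(s):
            noise_var, cost = s["noise_var"], s["cost"]
            # Posterior variance after fusing one Gaussian measurement.
            post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
            info_gain = 0.5 * math.log(prior_var / post_var)  # entropy reduction
            return info_gain - beta * cost
        return max(sensors, key=objective)

    sensors = [{"id": 1, "noise_var": 0.5, "cost": 1.0},
               {"id": 2, "noise_var": 0.1, "cost": 4.0}]
    print(pick_next_sensor(prior_var=2.0, sensors=sensors)["id"])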
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods.Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
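The "1/N formula of Miller and Madow" that the abstract benchmarks against is easy to state in code; this sketch contrasts the naive plug-in estimator with that first-order bias correction (function names are illustrative, entropy is in nats).

    import math
    from collections import Counter

    def entropy_plugin(samples):
        """Naive plug-in (maximum-likelihood) entropy estimate in nats."""
        n = len(samples)
        counts = Counter(samples)
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    def entropy_miller_madow(samples):
        """Plug-in estimate plus the (m - 1)/(2N) first-order bias correction,
        where m is the number of observed symbols. The abstract argues this
        correction can be inaccurate outside its asymptotic regime."""
        n = len(samples)
        m = len(set(samples))
        return entropy_plugin(samples) + (m - 1) / (2 * n)

    data = [0, 1, 1, 2, 0, 0, 3, 1]
    print(entropy_plugin(data), entropy_miller_madow(data))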
Collection and Analysis of Microprocessor Design Errors Research on practical design verification techniques has long been impeded by the lack of published, detailed error data. We have systematically collected design error data over the last few years from a number of academic microprocessor design projects. We analyzed this data and report on the lessons learned in the collection effort.
The challenges of merging two similar structured overlays: a tale of two networks Structured overlay networks is an important and interesting primitive that can be used by diverse peer-to-peer applications. Multiple overlays can result either because of network partitioning or (more likely) because different groups of peers build such overlays separately before coming in contact with each other and wishing to coalesce the overlays together. This paper is a first look into how multiple such overlays (all using the same protocols) can be merged – which is critical for usability and adoption of such an internet-scale distributed system. We elaborate how two networks using the same protocols can be merged, looking specifically into two different overlay design principles: (i) maintaining the ring invariant and (ii) structural replications, either of which are used in various overlay networks to guarantee functional correctness in a highly dynamic (membership changes) environment. Particularly, we show that ring based networks can not operate until the merger operation completes. In contrast, from the perspective of individual peers in structurally replicated overlays there is no disruption of service, and they can continue to discover and access resources that they could originally do before the beginning of the merger process, even though resources from the other network become visible only gradually with the progress of the merger process.
CORDIC-based computation of ArcCos and ArcSin CORDIC-based algorithms to compute cos⁻¹(t), sin⁻¹(t) and √(1−t²) are proposed. The implementation requires a standard CORDIC module plus a module to compute the direction of rotation, this being the same hardware required for the extended CORDIC vectoring, recently proposed by the authors. Although these functions can be obtained as a special case of this extended vectoring, the specific algorithm we propose here presents two significant improvements: (1) it achieves an angle granularity of 2⁻ⁿ using the same datapath width as the standard CORDIC algorithm (about n bits, instead of about 2n which would be required using the extended vectoring), and (2) no repetitions of iterations are needed. The proposed algorithm is compatible with the extended vectoring and, in contrast with previous implementations, the number of iterations and the delay of each iteration are the same as for the conventional CORDIC algorithm.
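For grounding, here is the standard rotation-mode CORDIC module that the abstract says the proposed design extends with a direction-of-rotation unit; this sketch computes only cos/sin, in floating point, with illustrative names and iteration count, and is not the paper's arccos/arcsin algorithm.

    import math

    def cordic_cos_sin(theta, n=32):
        """Rotation-mode CORDIC sketch: drive the residual angle z to zero with
        shift-add micro-rotations. Valid for |theta| < ~1.74 rad without
        argument reduction; n iterations give about n bits of angle."""
        K = 1.0
        for i in range(n):
            K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative gain
        x, y, z = K, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0 else -1.0                    # direction of rotation
            x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
            z -= d * math.atan(2.0 ** (-i))
        return x, y   # (cos(theta), sin(theta))

    print(cordic_cos_sin(0.5), (math.cos(0.5), math.sin(0.5)))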
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
Scores: 1.2, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Schedulability analysis of dynamic priority real-time systems with contention In multicore scheduling of hard real-time systems, there is a significant source of unpredictability due to the interference caused by the sharing of hardware resources. This paper deals with the schedulability analysis of multicore systems where the interference caused by the sharing of hardware resources is taken into account. We rely on a task model where this interference is integrated in a general way, without depending on a specific type of hardware resource. There are similar approaches but they consider fixed priorities. The schedulability analysis is provided for dynamic priorities assuming constrained deadlines and based on the demand bound function. We propose two techniques, one more pessimistic than the other but with a lower computational cost. We evaluate the two proposals for different task allocators in terms of the increased estimated utilization. The results show that both bounds are valid for ensuring schedulability although, as expected, one is tighter than the other. The evaluation also serves to compare allocators to see which one produces less interference.
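A hedged sketch of the demand-bound-function test underlying this kind of analysis: the classical uniprocessor EDF check dbf(t) ≤ t for constrained-deadline tasks. The finite horizon and task tuples are illustrative, and the paper's contention-aware interference terms are only gestured at in a comment.

    def edf_schedulable(tasks, horizon):
        """Classical demand-bound-function test for EDF with constrained
        deadlines: schedulable iff dbf(t) <= t for all t up to `horizon`.
        tasks = [(C, D, T), ...] with WCET C and deadline D <= period T.
        A contention-aware analysis like the paper's would inflate each C
        with the interference from shared hardware resources."""
        def dbf(t):
            # Jobs with both release and absolute deadline inside [0, t].
            return sum(((t - D) // T + 1) * C for C, D, T in tasks if t >= D)
        return all(dbf(t) <= t for t in range(1, horizon + 1))

    # Two illustrative tasks, checked over one hyperperiod (lcm of periods).
    print(edf_schedulable([(1, 4, 5), (2, 6, 8)], horizon=40))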
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
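Since dominance frontiers are the paper's key new concept, a small sketch may help. This uses the later Cooper–Harvey–Kennedy two-predecessor walk rather than the paper's own construction, and it assumes immediate dominators are already available.

    def dominance_frontiers(preds, idom):
        """Compute dominance frontiers from predecessor lists and immediate
        dominators (Cooper-Harvey-Kennedy formulation, not Cytron et al.'s
        original). `idom` maps node -> immediate dominator; the entry node
        maps to itself."""
        df = {n: set() for n in preds}
        for b, ps in preds.items():
            if len(ps) >= 2:              # only join points create frontier entries
                for p in ps:
                    runner = p
                    while runner != idom[b]:
                        df[runner].add(b)
                        runner = idom[runner]
        return df

    # Diamond CFG: entry -> a, b; a, b -> merge.
    preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
    idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
    print(dominance_frontiers(preds, idom))   # 'a' and 'b' have frontier {'merge'}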
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
An Integrated Discrete-Time Delay-Compensating Technique for Large-Array Beamformers This paper implements a wide aperture high-resolution true time delay for frequency-uniform beamforming gain in large-scale phased arrays. We propose a baseband discrete-time delay-compensating technique to augment the conventional phase-shift-based analog or hybrid beamformers. A generalized design methodology is first developed to compare delay-compensating analog or hybrid beamforming architecture with their digital counterpart for a given number of antenna elements, modulation bandwidth, ADC dynamic range, and delay resolution. This paper shows that delay-compensating analog or hybrid beamformers are more energy-efficient for high dynamic-range applications compared to true-time-delay digital beamformers. To demonstrate the feasibility of our proposed technique, a four-element analog delay-compensating baseband beamformer in 65-nm CMOS is prototyped. A time-interleaved switched-capacitor array implements the discrete-time delay-compensating beamformer with a wide delay range of 15-ns and 5-ps resolution. Measured power consumption is 47 mW with frequency-uniform array gain over 100-MHz modulated bandwidth, independent of angle of arrival. The proposed delay compensation scheme is scalable to accommodate the delay differences for large antenna arrays with higher range/resolution ENOB compared with prior art.
A 0.1–3.5-GHz Duty-Cycle Measurement and Correction Technique in 130-nm CMOS A duty-cycle correction technique using a novel pulsewidth modification cell is demonstrated across a frequency range of 100 MHz–3.5 GHz. The technique works at frequencies where most digital techniques implemented in the same technology node fail. An alternative method of making time domain measurements such as duty cycle and rise/fall times from the frequency domain data is introduced. The data are obtained from the equipment that has significantly lower bandwidth than required for measurements in the time domain. An algorithm for the same has been developed and experimentally verified. The correction circuit is implemented in a 0.13-μm CMOS technology and occupies an area of 0.011 mm². It corrects to a residual error of less than 1%. The extent of correction is limited by the technology at higher frequencies.
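The frequency-domain route to a time-domain quantity can be illustrated with a textbook relation rather than the paper's algorithm: for an ideal rectangular wave with duty cycle D, the harmonic magnitudes obey |c_k| ∝ |sin(πkD)|/k, so |c2|/|c1| = |cos(πD)| and D can be recovered from two low-bandwidth magnitude measurements (assuming D < 0.5). A hedged Python sketch:

```python
# Recover duty cycle D (< 0.5) from two harmonic magnitudes: a textbook
# relation for ideal rectangular waves, not the paper's exact algorithm.
import numpy as np

fs, f0, duty = 1000, 10, 0.30                 # sample rate, fundamental, true D
t = np.arange(0, 1, 1 / fs)
x = ((t * f0) % 1.0 < duty).astype(float)     # ideal rectangular wave

spec = np.abs(np.fft.rfft(x))                 # 1 s of data -> bin k is k Hz
c1, c2 = spec[f0], spec[2 * f0]               # 1st and 2nd harmonic magnitudes
d_est = np.arccos(min(c2 / c1, 1.0)) / np.pi  # |c2|/|c1| = |cos(pi*D)|
print(f"true D = {duty}, estimated D = {d_est:.3f}")
```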
A 1-GHz 16-Element Four-Beam True-Time-Delay Digital Beamformer Phased arrays are widely used due to their low power and small area usage. However, phased arrays depend on the narrowband assumption and, therefore, are not suitable for high-bandwidth applications. Emerging communication standards require increasingly higher bandwidths for improved data rates, which results in a need for timed arrays. However, high power consumption and large area requirements are drawbacks of radio frequency (RF) timed arrays. To resolve these issues, we introduce the first true-time-delay digital beamforming IC, which eliminates beam squinting error by adopting a baseband true-time-delay technique. Furthermore, we present a constant output impedance current-steering digital-to-analog converter (DAC), which improves the spurious-free dynamic range (SFDR) of a bandpass delta–sigma modulator by 7 dB. Due to the new DAC architecture, the 16-element beamformer improves SFDR by 13.7 dB from the array. Measured error vector magnitudes (EVMs) are better than 37 dB for 5-MBd quadratic-amplitude modulation (QAM)-64, QAM-256, and QAM-512. The prototype beamformer achieves nearly ideal beam patterns for both conventional and adaptive beamforming (i.e., adaptive nulling and tapering). The difference between normalized measured beam patterns and normalized simulated beam patterns is less than 1 dB within the 3-dB beamwidth. The beamformer, including 16 bandpass analog-to-digital converters (ADCs), occupies 0.29 mm² and consumes 453 mW in total power.
A Four-Element 500-MHz 40-mW 6-bit ADC-Enabled Time-Domain Spatial Signal Processor Next-generation wireless communication requires phased-array systems with large modulated bandwidths and high energy efficiency, ensuring Gb/s data communication. Conventional phase-shifter-based arrays result in frequency-dependent processing and, therefore, beam-squinting in an array. This work demonstrates a four-element 500-MHz modulated bandwidth true-time-delay-based ADC-enabled spatial sign...
Silicon-Based Ultra-Wideband Beam-Forming Ultra-wideband (UWB) beam-forming, a special class of multiple-antenna systems, allows for high azimuth and depth resolutions in ranging and imaging applications. This paper reports a fully integrated UWB beam-former featuring controllable true time delay and power gain. Several system and circuit level parameters and characterization methods influencing the design and testing of UWB beam-formers are discussed. A UWB beam-former prototype for imaging applications has been fabricated with the potential to yield 20 mm of range resolution and a 7° angular resolution from a four-element array with 10 mm element spacing. The UWB beam-former accomplishes a 4-bit delay variation for a total of 64 ps of achievable group delay with a 4-ps resolution, a 5-dB gain variation in 1-dB steps, and a worst case -3-dB gain bandwidth of 13 GHz. Overall operation is achieved by the integration of a 3-bit tapped delay trombone-type structure with a 4-ps variable delay resolution, a 1-bit, 32-ps fixed delay coplanar-type structure, and a variable-gain distributed amplifier. The prototype chip fabricated in a 0.18-μm BiCMOS SiGe process occupies 1.6 mm² of silicon area and consumes 87.5 mW from a 2.5-V supply at the maximum gain setting of 10 dB.
Compact Cascadable gm-C All-Pass True Time Delay Cell With Reduced Delay Variation Over Frequency At low-GHz frequencies, analog time-delay cells realized by LC delay lines or transmission lines are unpractical in CMOS, due to their large size. As an alternative, delays can be approximated by all-pass filters exploiting transconductors and capacitors (gm-C filters). This paper presents an easily cascadable compact gm-C all-pass filter cell for 1-2.5 GHz. Compared to previous gm-RC and gm-C filter cells, it achieves at least 5x larger frequency range for the same relative delay variation, while keeping gain variation within 1 dB. This paper derives design equations for the transfer function and several non-idealities. Circuit techniques to improve phase linearity and reduce delay variation over frequency are also proposed. A 160 nm CMOS chip with maximum delay of 550 ps is demonstrated with monotonous delay steps of 13 ps (41 steps) and an RMS delay variation error of less than 10 ps over more than an octave in frequency (1-2.5 GHz). The delay per area is at least 50x more than for earlier chips. The all-pass cells are used to realize a four element timed-array receiver IC. Measurement results of the beam pattern demonstrate the wideband operation capability of the gm-RC time delay cell and timed-array IC-architecture.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
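The MRU-change idea is easy to prototype: simulate the replacement stack of a set and count accesses that hit the MRU position versus those that do not. A minimal Python sketch (fully associative for brevity; the trace and associativity are made up):

```python
# A minimal sketch of the MRU-change idea: maintain the LRU stack of a
# cache set and count how many accesses hit the MRU position versus
# cause an "MRU change" (an access to a non-MRU line).
def mru_changes(trace, assoc=4):
    stack, changes, hits_mru = [], 0, 0
    for line in trace:
        if stack and stack[0] == line:
            hits_mru += 1          # access to MRU line: working set stable
        else:
            changes += 1           # non-MRU access: possible working-set change
            if line in stack:
                stack.remove(line)
            stack.insert(0, line)  # move/insert as the new MRU line
            del stack[assoc:]      # evict beyond the associativity
    return hits_mru, changes

trace = [1, 1, 1, 2, 2, 1, 3, 3, 3, 3, 4, 4]
print(mru_changes(trace))  # (7, 5): accesses concentrate on the MRU line
```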
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Supporting Aggregate Queries Over Ad-Hoc Wireless Sensor Networks We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects.
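Aggregates such as AVG are handled in such systems with partial-state records that merge up the routing tree; the sketch below is a generic Python illustration of that init/merge/evaluate decomposition (not the system's actual API):

```python
# A hedged sketch of in-network aggregation: AVG is not directly
# decomposable, so each node forwards a partial-state record (sum, count)
# that parents merge, in the spirit of the SQL-style interface described.
def init_avg(value):
    return (value, 1)                    # (sum, count)

def merge_avg(a, b):
    return (a[0] + b[0], a[1] + b[1])    # merge two partial states

def eval_avg(state):
    s, c = state
    return s / c

# Leaves report readings; an interior node merges children with its own.
children = [init_avg(21.0), init_avg(23.5), init_avg(22.0)]
node_state = init_avg(22.5)
for c in children:
    node_state = merge_avg(node_state, c)
print(eval_avg(node_state))              # 22.25, one small record per hop
```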
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes--ILP, TLP, and DLP-demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-μm CMOS technology in an area of 1.75×1.55 mm², the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10⁻⁹ with a pseudorandom bit sequence of 2²³−1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
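The SC-FDMA signal structure implemented here is DFT-spread OFDM, which takes only a few lines in numpy: DFT-precode M data symbols, map them onto contiguous subcarriers, and synthesize with an N-point IFFT. A hedged sketch (localized mapping and sizes are assumptions; no cyclic prefix, pulse shaping, or USRP2 I/O):

```python
# A minimal DFT-spread-OFDM (SC-FDMA) modulator sketch in numpy: M-point
# DFT precoding, then localized subcarrier mapping into an N-point IFFT.
import numpy as np

M, N = 12, 64                               # occupied and total subcarriers
sym = (np.random.choice([-1, 1], M)
       + 1j * np.random.choice([-1, 1], M)) / np.sqrt(2)   # QPSK symbols

spread = np.fft.fft(sym) / np.sqrt(M)       # DFT precoding (the "SC" part)
grid = np.zeros(N, dtype=complex)
grid[:M] = spread                           # localized mapping (assumed)
tx = np.fft.ifft(grid) * np.sqrt(N)         # OFDM-style IFFT synthesis
print(tx.shape, np.round(np.mean(np.abs(tx)**2), 3))  # (64,) ~M/N power
```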
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
Bandwidth Limitation for the Constant Envelope Components of an OFDM Signal in a LINC Architecture The linear amplification using non-linear components (LINC) power amplifier architecture uses saturated amplifiers to achieve high efficiency and linearity. However, the non-linear operations for LINC separation lead to bandwidth expansion. The expanded bandwidth causes problems for the analog circuitry and effectively places a limit on the modulation bandwidth of the transmitted signal. We show that for multi-carrier signals, it is the phase component of the transmitted modulation that dominates the spectrum of the LINC components. A novel bandwidth reduction scheme is then proposed for these components. The scheme repetitively limits the bandwidth of the LINC components while inhibiting envelope variations. Measurements show a 46% bandwidth reduction while still meeting the WLAN spectrum mask. This implies an increase of 85% in the modulation bandwidth of the transmitted signal.
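For context, the standard LINC separation that causes this bandwidth expansion is compact to state: s(t) = A(t)e^{jφ(t)} splits into two constant-envelope components at phases φ ± arccos(A/Amax). A minimal numpy sketch (an illustrative stand-in signal, not the paper's WLAN waveform or its bandwidth-reduction scheme):

```python
# Textbook LINC signal component separation: the two components have
# constant envelope Amax/2 and their sum reconstructs the original signal.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # stand-in signal
a, phi = np.abs(s), np.angle(s)
a_max = a.max()

theta = np.arccos(a / a_max)              # outphasing angle
s1 = 0.5 * a_max * np.exp(1j * (phi + theta))
s2 = 0.5 * a_max * np.exp(1j * (phi - theta))

print(np.allclose(s1 + s2, s))            # True: the sum restores the signal
print(np.ptp(np.abs(s1)) < 1e-9)          # True: s1 has constant envelope
```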
SFDR-bandwidth limitations for high speed high resolution current steering CMOS D/A converters Although very high update rates are achieved in recent publications on high resolution D/A converters, the bottleneck in the design is to achieve a high spurious free output signal bandwidth. The influence of the dynamic output impedance on the chip performance has been analyzed and identified as an important limitation for the spurious free dynamic range (SFDR) of high resolution DACs. Based on the presented analysis, an optimized topology is proposed.
High-Throughput Signal Component Separator for Asymmetric Multi-Level Outphasing Power Amplifiers This paper presents an energy-efficient high-throughput and high-precision signal component separator (SCS) chip design for the asymmetric-multilevel-outphasing (AMO) power amplifier. It uses a fixed-point piece-wise linear functional approximation developed to improve the hardware efficiency of the outphasing signal processing functions. The chip is fabricated in a 45 nm SOI CMOS process and the SCS occupies an active area of 1.5 mm². The new algorithm enables the SCS to run at a throughput of 3.4 GSamples/s producing the phases with 12-bit accuracy. Compared to traditional low-throughput AMO SCS implementations, at 0.8 GSamples/s this design improves the area efficiency by 25× and the energy-efficiency by 2×. This fastest high-precision SCS to date enables a new class of high-throughput mm-wave and base station transmitters that can operate at high area, energy and spectral efficiency.
Predistortion of Digital RF PWM Signals Considering Conditional Memory The trend in transmitter systems is to move the digital domain closer towards the antenna using digital modulators and drivers to reduce circuit complexity and to save power. A common assumption made is that they are capable of generating ideal pulses and thus do not suffer from analog imperfections. But the output signals of real drivers for high frequency operation are not perfectly rectangular anymore, which leads to distortion lowering the signal quality. In this paper the general properties of high frequency digital driver circuits operating at 2.6 GHz are analyzed and the impact of the different effects is presented. The predistortion of such drivers in the context of digital discrete time RF PWM modulators is studied. It has been found that conventional sample based predistortion can only correct the driver nonlinearity from -29 dBc to -49 dBc for the example considered using a 40 MHz bandwidth signal at 2.6 GHz. Therefore a special predistortion scheme considering the impact of pulses adjacent to the other samples is proposed. The mitigation of effects due to the discrete time nature of the signal is considered and discussed in detail. The capabilities of the proposed predistortion scheme are verified by extensive simulations as well as by measurements. By applying the proposed predistortion concept the spectral quality can be further improved to -66 dBc. In addition different scenarios with limited resolution and a carrier frequency offset are analyzed.
Mismatch-based timing errors in current steering DACs Current Steering Digital-to-Analog Converters (CS-DAC) are important ingredients in many high-speed data converters. Various types of timing errors such as mismatch based timing errors limit broad-band performance. A framework of timing errors is presented here and it is used to analyze these errors. The extracted relationship between performance, block requirements and architecture (e.g segmentation) gives insight on design tradeoffs in Nyquist DACs and multi-bit current-based ΣΔ Modulators.
An Efficient Mixed-Signal 2.4-GHz Polar Power Amplifier in 65-nm CMOS Technology A 65-nm digitally modulated polar transmitter incorporates a fully integrated, efficient 2.4-GHz switching Inverse Class-D power amplifier. Low-power digital filtering on the amplitude path helps remove spectral images for coexistence. The transmitter integrates the complete LO distribution network and digital drivers. Operating from a 1-V supply, the PA has 21.8-dBm peak output power with 44% efficiency. Simple static predistortion helps the transmitter meet EVM and mask requirements of 802.11g 54-Mb/s WLAN data with 18% average efficiency.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
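Given immediate dominators, the dominance frontiers themselves can be computed with a short walk up the dominator tree; the sketch below uses the later Cooper-Harvey-Kennedy formulation of the same sets (a hedged illustration, not this paper's original algorithm):

```python
# Compute dominance frontiers from immediate dominators: `preds` maps each
# node to its CFG predecessors, `idom` maps each node to its immediate
# dominator (None for the entry node).
def dominance_frontiers(preds, idom):
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                   # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:   # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a -> join; b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'join'}, 'b': {'join'}, 'join': set()}
```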
GPUWattch: enabling energy optimizations in GPGPUs General-purpose GPUs (GPGPUs) are becoming prevalent in mainstream computing, and performance per watt has emerged as a more crucial evaluation metric than peak performance. As such, GPU architects require robust tools that will enable them to quickly explore new ways to optimize GPGPUs for energy efficiency. We propose a new GPGPU power model that is configurable, capable of cycle-level calculations, and carefully validated against real hardware measurements. To achieve configurability, we use a bottom-up methodology and abstract parameters from the microarchitectural components as the model's inputs. We developed a rigorous suite of 80 microbenchmarks that we use to bound any modeling uncertainties and inaccuracies. The power model is comprehensively validated against measurements of two commercially available GPUs, and the measured error is within 9.9% and 13.4% for the two target GPUs (GTX 480 and Quadro FX5600). The model also accurately tracks the power consumption trend over time. We integrated the power model with the cycle-level simulator GPGPU-Sim and demonstrate the energy savings by utilizing dynamic voltage and frequency scaling (DVFS) and clock gating. Traditional DVFS reduces GPU energy consumption by 14.4% by leveraging within-kernel runtime variations. More finer-grained SM cluster-level DVFS improves the energy savings from 6.6% to 13.6% for those benchmarks that show clustered execution behavior. We also show that clock gating inactive lanes during divergence reduces dynamic power by 11.2%.
Self-stabilizing systems in spite of distributed control The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.
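A concrete instance of such a design is Dijkstra's classic K-state token ring from this line of work: regardless of the initial (possibly illegitimate) state, the ring converges to configurations in which exactly one machine is privileged. A small Python simulation (scheduler choice and parameters are arbitrary):

```python
# Dijkstra's K-state self-stabilizing token ring: machine 0 is privileged
# when s[0] == s[n-1]; machine i > 0 is privileged when s[i] != s[i-1].
import random

def step(s, k):
    """Fire one privileged machine chosen by an arbitrary scheduler."""
    n = len(s)
    privileged = ([0] if s[0] == s[-1] else [])
    privileged += [i for i in range(1, n) if s[i] != s[i - 1]]
    i = random.choice(privileged)          # at least one always exists
    s[i] = (s[0] + 1) % k if i == 0 else s[i - 1]
    return i

n, k = 5, 6                                  # K >= number of machines
s = [random.randrange(k) for _ in range(n)]  # arbitrary (possibly bad) state
for _ in range(200):                         # enough steps to stabilize
    step(s, k)
count = (1 if s[0] == s[-1] else 0) + sum(s[i] != s[i - 1] for i in range(1, n))
print(count)                                 # 1: exactly one privilege remains
```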
Barrier certificates for nonlinear model validation Methods for model validation of continuous-time nonlinear systems with uncertain parameters are presented in this paper. The methods employ functions of state-parameter-time, termed barrier certificates, whose existence proves that a model and a feasible parameter set are inconsistent with some time-domain experimental data. A very large class of models can be treated within this framework; this includes differential-algebraic models, models with memoryless/dynamic uncertainties, and hybrid models. Construction of barrier certificates can be performed by convex optimization, utilizing recent results on the sum of squares decomposition of multivariate polynomials.
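One simple special case of the certificate conditions can be written out directly (a sketch consistent with the abstract; the paper covers the more general differential-algebraic and uncertain settings):

```latex
% Model \dot{x} = f(x,p), initial set X_0, parameter set P, and a measured
% set X_T at time T are jointly invalidated if some B(x,t) satisfies:
\begin{align*}
  B(x_T, T) &> 0 \quad \forall\, x_T \in X_T, \\
  B(x_0, 0) &\le 0 \quad \forall\, x_0 \in X_0, \\
  \frac{\partial B}{\partial x}(x,t)\, f(x,p)
    + \frac{\partial B}{\partial t}(x,t) &\le 0
  \quad \forall\, x,\ p \in P,\ t \in [0, T].
\end{align*}
% B is nonincreasing along trajectories and starts nonpositive, so no
% trajectory from X_0 can reach X_T: the data contradict the model.
```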
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.22
0.22
0.22
0.22
0.073889
0.018333
0
0
0
0
0
0
0
0
Computationally efficient algorithm for reducing the complexity of software radio receiver's filter bank In this paper, a computationally efficient method for extracting individual radio channels from the output of the wideband analog to digital converter (ADC) is presented. In a software radio, the extraction of individual channels from the output of the wideband ADC is by far the most computationally demanding task; hence it is very important to devise computationally efficient algorithms for this task. We proposed a new algorithm by assuming the symmetric signal with periods of the length-P (number of coefficients in low pass filter prototype) as an input signal to the subsampled filter bank. Also we divide the complex input x[n] into real and imaginary parts, then we perform operations in each part using two parallel filter banks. Finally, we add the outputs in two parts. By employing this algorithm to the subsampled filter bank channelizer, the complexity of the proposed algorithm was reduced by considerable amount of 81%.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
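Chord's core idea fits in a few lines: hash nodes and keys onto one identifier ring and store each key at its successor. The Python sketch below shows only this consistent-hashing mapping; real Chord adds finger tables for O(log N) lookups and handles churn (the identifier-space size and node names here are arbitrary):

```python
# Minimal Chord-style mapping: a key lives at its successor, the first
# node clockwise at or after the key's identifier on the ring.
import hashlib

M = 2**16                                  # small identifier space for demo

def ident(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(node_ids, key_id):
    """First node id at or after key_id, wrapping around the ring."""
    candidates = [n for n in node_ids if n >= key_id]
    return min(candidates) if candidates else min(node_ids)

nodes = sorted(ident(f"node-{i}") for i in range(8))
for key in ("alpha", "beta", "gamma"):
    k = ident(key)
    print(f"key {key!r} (id {k}) -> node id {successor(nodes, k)}")
```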
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
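As a concrete instance of the iteration, here is a hedged numpy sketch of ADMM for one of the review's example problems, the lasso (minimize 0.5‖Ax − b‖² + λ‖x‖₁): a ridge-type x-update, a soft-thresholding z-update, and a scaled dual update. Problem sizes and penalty parameters below are arbitrary.

```python
# ADMM for the lasso with the standard splitting x = z.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        # x-update: ridge solve via the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: elementwise soft thresholding
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z                               # scaled dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[:3] = (3.0, -2.0, 1.5)
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.round(lasso_admm(A, b, lam=0.5), 2)[:5])   # sparse, near x_true
```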
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Towards a scatter-gather architecture: hardware and software issues The on-node performance of High performance computing (HPC) applications is traditionally dominated by memory operations. Put simply, memory is what these applications "do." Unfortunately, they don't do it well. Caches, our first line of attack in the battle for memory performance, often throw away most of the data they fetch before using it. Processor cores, one of our most expensive resources, spend an inordinate amount of time performing simple address computations. Addressing these issues will require new approaches to how on-chip memory is organized and how memory operations are performed. Under Project 38, a joint Department of Energy / Department of Defense architectural resarch project, we have focused on exploring what a flexible in-memory scatter-gather architecture could look like in the context of several important HPC applications.
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing through combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per each memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
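The word-parallel, bit-serial flavor of such an architecture can be mimicked in a few lines: operands live bit-sliced across columns, and a one-bit ALU per row ripples through the bit positions. A numpy illustration (the layout and widths are assumptions, not GP-SIMD's actual cell design):

```python
# Word-parallel, bit-serial addition: every row adds its two operands
# using one 1-bit full adder per row, one bit position per "cycle".
import numpy as np

def bitserial_add(a_bits, b_bits):
    """Add per-row operands stored LSB-first as 0/1 columns."""
    rows, width = a_bits.shape
    out = np.zeros((rows, width + 1), dtype=np.uint8)
    carry = np.zeros(rows, dtype=np.uint8)
    for k in range(width):                 # one pass per bit position
        s = a_bits[:, k] + b_bits[:, k] + carry
        out[:, k], carry = s & 1, s >> 1
    out[:, width] = carry
    return out

def to_bits(v, width):
    return ((v[:, None] >> np.arange(width)) & 1).astype(np.uint8)

a = np.array([3, 10, 255]); b = np.array([5, 22, 1])
s = bitserial_add(to_bits(a, 8), to_bits(b, 8))
print(s @ (1 << np.arange(9)))             # [  8  32 256]
```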
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Rebooting the Data Access Hierarchy of Computing Systems We have been experiencing two very important movements in computing. On the one hand, a tremendous amount of resource has been invested into innovative applications such as first-principle-based methods, deep learning and cognitive computing. On the other hand, the industry has been taking a technological path where application performance and energy efficiency vary by more than two orders of magnitude depending on their parallelism, heterogeneity, and locality. We envision that a "perfect storm" is coming because of the interaction between these two movements. Many of these new and high-valued applications need to touch a very large amount of data with little data reuse and data movement has become the dominating factor for both power and performance of these applications. It will be critical to match the compute throughput to the data access bandwidth and to locate the compute near data. Much has been and continuously needs to be learned about algorithms, languages, compilers and hardware architecture in this movement. What are the killer applications that may become the new driver for future technology development? How hard is it to program existing systems to address the data movement issues today? How will we program these systems in the future? How will innovations in memory devices present further opportunities and challenges in designing new systems? What is the impact on long-term software engineering cost of applications? In this paper, we present some lessons learned as we design the IBM-Illinois C3SR (Center for Cognitive Computing Systems Research) Erudite system inside this perfect storm.
Hyper-Ap: Enhancing Associative Processing Through A Full-Stack Optimization Associative processing (AP) is a promising PIM paradigm that overcomes the von Neumann bottleneck (memory wall) by virtue of a radically different execution model. By decomposing arbitrary computations into a sequence of primitive memory operations (i.e., search and write), AP’s execution model supports concurrent SIMD computations in-situ in the memory array to eliminate the need for data movement. This execution model also provides a native support for flexible data types and only requires a minimal modification on the existing memory design (low hardware complexity). Despite these advantages, the execution model of AP has two limitations that substantially increase the execution time, i.e., 1) it can only search a single pattern in one search operation and 2) it needs to perform a write operation after each search operation. In this paper, we propose the Highly Performant Associative Processor (Hyper-AP) to fully address the aforementioned limitations. The core of Hyper-AP is an enhanced execution model that reduces the number of search and write operations needed for computations, thereby reducing the execution time. This execution model is generic and improves the performance for both CMOS-based and RRAM-based AP, but it is more beneficial for the RRAM-based AP due to the substantially reduced write operations. We then provide complete architecture and micro-architecture with several optimizations to efficiently implement Hyper-AP. In order to reduce the programming complexity, we also develop a compilation framework so that users can write C-like programs with several constraints to run applications on Hyper-AP. Several optimizations have been applied in the compilation process to exploit the unique properties of Hyper-AP. Our experimental results show that, compared with the recent work IMP, Hyper-AP achieves up to 54×/4.4× better power-/area-efficiency for various representative arithmetic operations. For the evaluated benchmarks, Hyper-AP achieves 3.3× speedup and 23.8× energy reduction on average compared with IMP. Our evaluation also confirms that the proposed execution model is more beneficial for the RRAM-based AP than its CMOS-based counterpart.
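The search/write execution model is easy to mock up: a "search" matches all rows against a masked pattern in parallel, and a "write" updates the matching rows. The toy Python model below inverts one bit column in two search/write passes (an illustration of the generic AP model, not Hyper-AP's optimized microarchitecture; note both searches run before any write so the second pass does not match freshly written rows):

```python
# Toy associative-processing primitives over a bit matrix in numpy.
import numpy as np

def search(mem, pattern, mask):
    """Boolean vector of rows matching `pattern` on the masked columns."""
    return ((mem ^ pattern) & mask == 0).all(axis=1)

def write(mem, rows, pattern, mask):
    """On the selected rows, set the masked columns to `pattern`."""
    mem[rows] = (mem[rows] & ~mask) | (pattern & mask)

mem = np.array([[0, 1], [1, 0], [1, 1], [0, 0]], dtype=np.uint8)
mask = np.array([1, 0], dtype=np.uint8)        # operate on column 0 only

ones = search(mem, np.array([1, 0], dtype=np.uint8), mask)   # rows with bit=1
zeros = search(mem, np.array([0, 0], dtype=np.uint8), mask)  # rows with bit=0
write(mem, ones, np.array([0, 0], dtype=np.uint8), mask)     # 1 -> 0
write(mem, zeros, np.array([1, 0], dtype=np.uint8), mask)    # 0 -> 1
print(mem[:, 0])   # [1 0 0 1]: column inverted in two search/write passes
```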
Enabling Practical Processing in and near Memory for Data-Intensive Computing Modern computing systems suffer from the dichotomy between computation on one side, which is performed only in the processor (and accelerators), and data storage/movement on the other, which all other parts of the system are dedicated to. Due to this dichotomy, data moves a lot in order for the system to perform computation on it. Unfortunately, data movement is extremely expensive in terms of energy and latency, much more so than computation. As a result, a large fraction of system energy is spent and performance is lost solely on moving data in a modern computing system. In this work, we re-examine the idea of reducing data movement by performing Processing in Memory (PIM). PIM places computation mechanisms in or near where the data is stored (i.e., inside the memory chips, in the logic layer of 3D-stacked logic and DRAM, or in the memory controllers), so that data movement between the computation units and memory is reduced or eliminated. While the idea of PIM is not new, we examine two new approaches to enabling PIM: 1) exploiting analog properties of DRAM to perform massively-parallel operations in memory, and 2) exploiting 3D-stacked memory technology design to provide high bandwidth to in-memory logic. We conclude by discussing work on solving key challenges to the practical adoption of PIM.
McDRAM: Low Latency and Energy-Efficient Matrix Computations in DRAM. We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the...
Practical Near-Data Processing for In-Memory Analytics Frameworks. The end of Dennard scaling has made all systems energy-constrained. For data-intensive applications with limited temporal locality, the major energy bottleneck is data movement between processor chips and main memory modules. For such workloads, the best way to optimize energy is to place processing near the data in main memory. Advances in 3D integration provide an opportunity to implement near-data processing (NDP) without the technology problems that similar efforts had in the past. This paper develops the hardware and software of an NDP architecture for in-memory analytics frameworks, including MapReduce, graph processing, and deep neural networks. We develop simple but scalable hardware support for coherence, communication, and synchronization, and a runtime system that is sufficient to support analytics frameworks with complex data patterns while hiding all the details of the NDP hardware. Our NDP architecture provides up to 16x performance and energy advantage over conventional approaches, and 2.5x over recently-proposed NDP systems. We also investigate the balance between processing and memory throughput, as well as the scalability and physical and logical organization of the memory system. Finally, we show that it is critical to optimize software frameworks for spatial locality as it leads to 2.9x efficiency improvements for NDP.
A 12 bit 2.9 GS/s DAC With IM3 ≪ −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high-frequencies, co...
On The Advantages of Tagged Architecture This paper proposes that all data elements in a computer memory be made to be self-identifying by means of a tag. The paper shows that the advantages of the change from the traditional von Neumann machine to tagged architecture are seen in all software areas including programming systems, operating systems, debugging systems, and systems of software instrumentation. It discusses the advantages that accrue to the hardware designer in the implementation and gives examples for large- and small-scale systems. The economic costs of such an implementation for a minicomputer system are examined. The paper concludes that such a machine architecture may well be a suitable replacement for the traditional von Neumann architecture.
A dynamic analysis of the Dickson charge pump circuit Dynamics of the Dickson charge pump circuit are analyzed. The analytical results enable the estimation of the rise time of the output voltage and that of the power consumption during boosting. By using this analysis, the optimum number of stages to minimize the rise time has been estimated as 1.4 N_min, where N_min is the minimum value of the number of stages necessary for a given parame...
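The headline result is easy to apply numerically: if N_min stages are the fewest that can reach the target output voltage, the rise-time-optimal count is about 1.4 N_min. A small Python check with assumed values and an idealized per-stage boost of (V_dd − V_t), neglecting parasitics and load current:

```python
# Applying the 1.4 * N_min rule with assumed numbers.
import math

vdd, vt, v_target = 1.8, 0.4, 9.0         # assumed supply, diode drop, target
gain = vdd - vt                           # idealized boost per stage
n_min = math.ceil(v_target / gain - 1)    # from V_out ~ (N + 1) * (vdd - vt)
print(f"N_min = {n_min}, rise-time-optimal N ~ {1.4 * n_min:.1f}")
```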
Prediction of the Spectrum of a Digital Delta–Sigma Modulator Followed by a Polynomial Nonlinearity This paper presents a mathematical analysis of the power spectral density of the output of a nonlinear block driven by a digital delta-sigma modulator. The nonlinearity is a memoryless third-order polynomial with real coefficients. The analysis yields expressions that predict the noise floor caused by the nonlinearity when the input is constant.
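The effect is straightforward to reproduce numerically (a simulation companion, not the paper's closed-form result): feed a constant-input digital delta-sigma output through a memoryless third-order polynomial and compare in-band noise floors. A Python sketch using a second-order error-feedback modulator and assumed polynomial coefficients:

```python
# Noise-floor rise from a 3rd-order polynomial applied to a DDSM output.
import numpy as np
from scipy.signal import welch

def ddsm2(x_const, n):
    """2nd-order error-feedback digital delta-sigma modulator (multi-level)."""
    e1 = e2 = 0.0
    y = np.empty(n)
    for i in range(n):
        v = x_const + 2.0 * e1 - e2
        q = round(v)
        e1, e2 = v - q, e1                 # update the error history
        y[i] = q
    return y

n = 1 << 16
y = ddsm2(0.37, n)                         # constant modulator input
z = y + 0.1 * y**2 + 0.05 * y**3           # 3rd-order polynomial (assumed)
_, p_lin = welch(y, nperseg=4096)
_, p_nl = welch(z, nperseg=4096)
band = slice(1, 40)                        # low-frequency (in-band) bins
print(p_nl[band].mean() / p_lin[band].mean())  # > 1: the noise floor rises
```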
A 25 dBm Outphasing Power Amplifier With Cross-Bridge Combiners In this paper, we present a 25 dBm Class-D outphasing power amplifier (PA) with cross-bridge combiners. The Class-D PA is designed in a standard 45 nm process while the combiner is implemented on board using lumped elements for flexibilities in testing. Comparing with conventional non-isolated combiners, the elements of the cross-bridge combiner are carefully chosen so that additional resonance network is formed to reduce out-of-phase current, thereby increasing backoff efficiency of the outphasing PA. The Class-D outphasing PA with the proposed combiner is manufactured and measured at both 900 MHz and 2.4 GHz. It achieves 55% peak power-added efficiency (PAE) at 900 MHz and 45% at 2.4 GHz for a single tone input. For a 10 MHz LTE signal with 6 dB PAR, the PAE is 32% at 900 MHz with −39 dBc adjacent channel power ratio (ACPR) and 22% at 2.4 GHz with −33 dBc ACPR. With digital predistortion (DPD), the linearity of the PA at 2.4 GHz is improved further to reach −53 dBc, −50 dBc, −42 dBc ACPR for 10 MHz, 20 MHz, and 2-carrier 20 MHz LTE signals.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals are continuously monitored.
1.05
0.05
0.05
0.05
0.05
0.05
0.03
0.007895
0
0
0
0
0
0
Combined Application of Approximate Computing Techniques in DNN Hardware Accelerators This paper applies Approximate Computing (AC) techniques to the main elements which form a DNN hardware accelerator, namely the computation, communication, and memory subsystems. Specifically, approximate multipliers for computation, link voltage-swing reduction for communication, voltage over-scaling for the internal SRAM memory, and lossy compression of the external DRAM memory are considered. The different AC techniques are applied in isolation as well as in conjunction with each other. A set of representative CNN models are mapped onto the approximated hardware accelerators, and the performance vs. energy vs. accuracy trade-offs are derived for the execution of CNN inferences.
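As one concrete illustration of the computation-side AC technique, here is a generic truncation-based approximate multiplier sketch; it is not necessarily the specific multiplier design evaluated in the paper, and the operand width and number of dropped bits are assumptions.

    import random

    def approx_mul(a, b, drop=4):
        # Truncation-based approximate multiplier: discard the low `drop` bits
        # of each operand before multiplying, then shift back. In hardware this
        # removes partial products, trading a bounded error for energy/area.
        return ((a >> drop) * (b >> drop)) << (2 * drop)

    random.seed(0)
    errs = []
    for _ in range(10000):
        a, b = random.randrange(1 << 16), random.randrange(1 << 16)
        exact = a * b
        errs.append(abs(exact - approx_mul(a, b)) / max(exact, 1))
    print("mean relative error:", sum(errs) / len(errs))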
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
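A short sketch of the dominance-frontier concept the abstract introduces; this uses the later Cooper-Harvey-Kennedy iteration rather than the paper's original bottom-up formulation, and the diamond-shaped CFG is a toy example.

    def dominance_frontiers(preds, idom):
        # Compute dominance frontiers from predecessor lists and immediate
        # dominators: a node n with two or more predecessors lands in the
        # frontier of every node on the path from each predecessor up to
        # (but excluding) idom(n).
        df = {n: set() for n in idom}
        for n, ps in preds.items():
            if len(ps) >= 2:                 # only join points contribute
                for p in ps:
                    runner = p
                    while runner != idom[n]:
                        df[runner].add(n)
                        runner = idom[runner]
        return df

    # Diamond CFG: entry -> a, b; a, b -> merge
    preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
    idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
    print(dominance_frontiers(preds, idom))  # a and b have {'merge'} as frontier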
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
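A minimal sketch of the key-to-node mapping Chord implements: a key belongs to its successor, the first node clockwise from the key's identifier on the ring. For brevity this uses a global sorted list instead of Chord's O(log N) finger-table routing, and the 16-bit identifier space is an illustrative assumption.

    import hashlib

    M = 16                                     # identifier bits (small, for illustration)

    def h(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << M)

    def in_interval(x, a, b):
        # x in (a, b] on the circular identifier space
        return (a < x <= b) if a < b else (x > a or x <= b)

    def find_successor(node_ids, key_id):
        # Linear-scan stand-in for finger-table lookup: the key maps to the
        # first node whose identifier follows key_id clockwise on the ring.
        ring = sorted(node_ids)
        for i, n in enumerate(ring):
            if in_interval(key_id, ring[i - 1], n):   # ring[-1] wraps the circle
                return n
        return ring[0]

    nodes = [h(f"node{i}") for i in range(8)]
    print("key stored at node id:", find_successor(nodes, h("some-data-key")))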
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
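As a worked instance of the method, here is the standard ADMM splitting for the lasso, one of the applications the review discusses: an x-update via a cached Cholesky solve, a soft-thresholding z-update, and a dual update. The penalty rho, iteration count, and problem sizes are illustrative.

    import numpy as np

    def lasso_admm(A, b, lam, rho=1.0, iters=200):
        # Solves min 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM.
        n = A.shape[1]
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
        z = np.zeros(n)
        u = np.zeros(n)
        for _ in range(iters):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
            u = u + x - z                                            # dual ascent
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(lasso_admm(A, b, lam=1.0), 2))   # recovers the sparse support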
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
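A toy LOS link-budget sketch of the kind of distance-to-BER relationship the model captures. The paper uses a market-weighted headlamp beam pattern; here a generalized Lambertian lobe stands in as a common simplification, and every parameter (transmit power, detector area, responsivity, noise variance) is a hypothetical placeholder.

    import math

    def q_func(x):
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def los_ber(d, pt=1.0, m=1, area=1e-4, resp=0.5, noise_var=1e-13):
        # Boresight-aligned LOS link with a Lambertian source of order m:
        # received optical power falls off as 1/d^2, then OOK BER = Q(sqrt(SNR)).
        pr = pt * (m + 1) / (2 * math.pi * d**2) * area   # received optical power
        snr = (resp * pr)**2 / noise_var                  # electrical SNR after PD
        return q_func(math.sqrt(snr))

    for d in (5, 10, 20):
        print(f"d = {d:2d} m  BER ~ {los_ber(d):.2e}")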
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta-encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Research on OFDM Technology in 4G. As 3G (third-generation mobile communication systems) reaches the stage of increasingly large-scale application in China, with multimedia communication services among its most prominent features, research on the next-generation mobile communication systems (Beyond 3G, i.e., 4G, fourth-generation mobile communication systems) has long been under way. This paper mainly studies how to use OFDM technology as the core technology of the 4G mobile communication system to effectively improve the transmission rate, increase system capacity, and avoid the various kinds of interference caused by high-speed mobility.
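A minimal end-to-end sketch of why OFDM is attractive here: the IFFT/FFT pair plus a cyclic prefix turns a multipath channel into independent one-tap subcarriers. The subcarrier count, prefix length, and three-tap channel are illustrative assumptions.

    import numpy as np

    N_SC, CP = 64, 16                          # subcarriers, cyclic-prefix length
    rng = np.random.default_rng(1)

    # QPSK symbol on each subcarrier
    bits = rng.integers(0, 2, size=(2, N_SC))
    syms = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

    tx = np.fft.ifft(syms) * np.sqrt(N_SC)     # OFDM modulation = IFFT
    tx_cp = np.concatenate([tx[-CP:], tx])     # cyclic prefix absorbs channel memory

    h = np.array([1.0, 0.4, 0.2])              # toy multipath channel (taps < CP)
    rx = np.convolve(tx_cp, h)[:len(tx_cp)]

    rx_f = np.fft.fft(rx[CP:]) / np.sqrt(N_SC) # drop CP, demodulate with FFT
    eq = rx_f / np.fft.fft(h, N_SC)            # one-tap equalizer per subcarrier
    print("max symbol error:", np.max(np.abs(eq - syms)))   # ~ numerical precision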
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching-frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Asynchronous Adaptive Threshold Level Crossing ADC for Wearable ECG Sensors. A level-crossing ADC generates digitized samples consisting of the magnitude of the input signal and the time interval between two consecutive level crossings, produced whenever the input signal crosses a threshold level. This paper presents a new architecture of a low-power asynchronous adaptive-threshold level-crossing (LC) ADC suitable for wearable ECG sensors, based on a novel algorithm for determining the adaptive threshold. The adaptive threshold is determined by calculating the mean of the maximum and minimum values of the signal in a predetermined window. Polynomial interpolation is used to reconstruct the signal. A signal-to-noise-and-distortion ratio of 57.50 dB and a mean square error (MSE) of 1.368*10 V were achieved by the proposed algorithm for a 1 mV, 10 Hz input sinusoidal signal in MATLAB. The asynchronous adaptive-threshold LC ADC, operating from a supply voltage of 0.8 V, occupies a layout area of 266.33 μm × 331.385 μm when implemented in Cadence Virtuoso using 180 nm technology. The designed circuit consumes an average power of 367.6 nW for a 1 mVpp, 10 Hz input sinusoidal signal when simulated in Virtuoso.
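A behavioral sketch of the sampling rule the abstract describes: emit a sample on a level crossing and re-centre the level on (max + min) / 2 of a sliding window. The window length and crossing step delta are illustrative assumptions, and the hardware realizes this asynchronously rather than over uniform samples as here.

    import numpy as np

    def lc_sample(sig, t, window=200):
        # Level-crossing sampler with an adaptive threshold re-centred on the
        # mean of the windowed max and min, per the abstract's algorithm.
        samples = []
        lo, hi = sig[:window].min(), sig[:window].max()
        level, delta = (lo + hi) / 2, (hi - lo) / 8
        for i in range(1, len(sig)):
            if abs(sig[i] - level) >= delta:             # a level was crossed
                samples.append((t[i], sig[i]))
                w = sig[max(0, i - window):i + 1]
                level = (w.min() + w.max()) / 2          # adaptive re-centring
        return samples

    t = np.linspace(0, 0.2, 4000)
    ecg_like = 1e-3 * np.sin(2 * np.pi * 10 * t)         # 1 mV, 10 Hz test tone
    events = lc_sample(ecg_like, t)
    print(len(events), "events instead of", len(t), "uniform samples")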
A wireless wearable ECG sensor for long-term applications. Ubiquitous vital signs sensing using wireless medical sensors are promising alternatives to conventional, in-hospital healthcare systems. In this work, a wearable ECG sensor is proposed. This sensor system combined an appropriate wireless protocol for data communication with capacitive ECG signal sensing and processing. The ANT protocol was used as a low-data-rate wireless module to reduce the pow...
Low-Power High-Input-Impedance EEG Signal Acquisition SoC with Fully Integrated IA and Signal-Specific ADC for Wearable Applications. This paper presents a low-power, high-input-impedance analog front-end (AFE) design including an instrumentation amplifier (IA) and a neural-signal-specific ADC (NSS-ADC) for continuous acquisition of electroencephalography (EEG) signals. In the proposed AFE, low-voltage low-power design techniques are used to reduce the power consumption of the whole system. Furthermore, by utilizing the propose...
A 9-bit, 14 μW and 0.06 mm² Pulse Position Modulation ADC in 90 nm Digital CMOS. This work presents a compact, low-power, time-based architecture for nanometer-scale CMOS analog-to-digital conversion. A pulse position modulation ADC architecture is proposed and a prototype 9 bit PPM ADC incorporating a two-step TDC scheme is presented as proof of concept. The 0.06 mm² prototype is implemented in 90 nm CMOS and achieves 7.9 effective bits across the entire input bandwidth and d...
From Seizure Detection to Smart and Fully Embedded Seizure Prediction Engine: A Review Recent review papers have investigated seizure prediction, creating the possibility of preempting epileptic seizures. Correct seizure prediction can significantly improve the standard of living for the majority of epileptic patients, as the unpredictability of seizures is a major concern for them. Today, the development of algorithms, particularly in the field of machine learning, enables reliable and accurate seizure prediction using desktop computers. However, despite extensive research effort being devoted to developing seizure detection integrated circuits (ICs), dedicated seizure prediction ICs have not been developed yet. We believe that interdisciplinary study of system architecture, analog and digital ICs, and machine learning algorithms can promote the translation of scientific theory to a more realistic intelligent, integrated, and low-power system that can truly improve the standard of living for epileptic patients. This review explores topics ranging from signal acquisition analog circuits to classification algorithms and dedicated digital signal processing circuits for detection and prediction purposes, to provide a comprehensive and useful guideline for the construction, implementation and optimization of wearable and integrated smart seizure prediction systems.
A 13.34 μW Event-driven Patient-specific ANN Cardiac Arrhythmia Classifier for Wearable ECG Sensors. The artificial neural network (ANN) and its variants are favored algorithms for designing cardiac arrhythmia classifiers (CACs) because of their high accuracy. However, the implementation of an ultralow-power ANN-CAC is challenging due to the intensive computations. Moreover, the imbalanced MIT-BIH database limits the ANN-CAC performance. Several novel techniques are proposed to address the challenges in the low-power implementation. Firstly, a continuous-in-time discrete-in-amplitude (CTDA) signal flow is adopted to reduce the multiplication operations. Secondly, a conditional grouping scheme (CGS) in combination with biased training (BT) is proposed to handle the imbalanced training samples for better training convergence and evaluation accuracy. Thirdly, arithmetic-unit sharing with a customized high-performance multiplier improves the power efficiency. Verified on FPGA and synthesized in a 0.18 μm CMOS process, the proposed CTDA ANN-CAC can classify an arrhythmia within 252 μs at a 25 MHz clock frequency with an average power of 13.34 μW for a 75 bpm heart rate. Evaluated on the MIT-BIH database, it shows over 98% classification accuracy, 97% sensitivity, and 94% positive predictivity.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
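For reference, here is the classical single-sensor Lloyd-Max iteration that the paper generalizes to n distributed sensors under communication constraints: alternate between nearest-level partitioning and conditional-mean (centroid) updates. Level count, iteration count, and the Gaussian source are illustrative.

    import numpy as np

    def lloyd_max(samples, levels=4, iters=50):
        # Alternate partition/centroid updates; each step is non-increasing in
        # mean-squared quantization error, converging to a locally optimal
        # scalar quantizer.
        reps = np.quantile(samples, np.linspace(0.1, 0.9, levels))  # initial levels
        for _ in range(iters):
            edges = (reps[:-1] + reps[1:]) / 2        # nearest-neighbor cell edges
            idx = np.digitize(samples, edges)
            reps = np.array([samples[idx == k].mean() for k in range(levels)])
        return reps

    rng = np.random.default_rng(0)
    x = rng.standard_normal(20000)
    print(np.round(lloyd_max(x), 3))   # ~ the optimal 2-bit Gaussian levels ±0.453, ±1.51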
Distributed average consensus with least-mean-square deviation We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.
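A minimal simulation of the model the abstract studies: each node repeatedly replaces its value with a weighted average of its neighbors' values plus zero-mean noise, and the quality metric is the mean-square deviation from the running average. The ring topology and the fixed uniform edge weight are illustrative; the paper instead optimizes the symmetric edge weights by convex programming.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10
    w = 1.0 / 3.0                       # one fixed symmetric weight per ring edge
    W = np.eye(n) * (1 - 2 * w)         # rows sum to 1, so the average is preserved
    for i in range(n):
        W[i, (i + 1) % n] = W[i, (i - 1) % n] = w

    x = rng.standard_normal(n)
    for _ in range(200):
        x = W @ x + 1e-3 * rng.standard_normal(n)   # noisy local averaging step

    # Steady-state mean-square deviation of node values from their average:
    # the quantity the paper minimizes over the edge weights.
    print("MSD:", np.mean((x - x.mean())**2))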
An area-efficient multistage 3.0- to 8.5-GHz CMOS UWB LNA using tunable active inductors An area-efficient multistage 3.0- to 8.5-GHz ultra-wideband low-noise amplifier (LNA) utilizing tunable active inductors (AIs) is presented. The AI includes a negative impedance circuit (NIC) consisting of a pair of cross-coupled NMOS transistors and is tuned to vary the gain and bandwidth (BW) of the amplifier. Fabricated in a 90-nm digital CMOS process, the proposed fully on-chip LNA occupies a core chip area of only 0.022 mm2. The measurement results show a power gain S21 of 16.0 dB, a noise figure of 3.1-4.4 dB, and an input return loss S11 of less than -10.5 dB over the 3-dB BW of 3.0-8.5 GHz. Tuning the AIs allows one to increase the gain above 18.0 dB and to extend the BW over 9.4 GHz. The LNA consumes 16.0 mW from a power supply of 1.2 V.
Joint mismatch and channel compensation for high-speed OFDM receivers with time-interleaved ADCs Analog-to-digital converters (ADCs) with high sampling rates and output resolution are required for the design of mostly digital transceivers in emerging multi-Gigabit communication systems. A promising approach is to use a time-interleaved (TI) architecture with slower sub-ADCs in parallel, but mismatch among the sub-ADCs, if left uncompensated, can cause error floors in receiver performance. Conventional mismatch compensation schemes typically have complexity (in terms of number of multiplications) that increases with the desired resolution at the output of the TI-ADC. In this paper, we investigate an alternative approach, in which mismatch and channel dispersion are compensated jointly, with the performance metric being overall link reliability rather than ADC performance. For an OFDM system, we characterize the structure of mismatch-induced interference, and demonstrate the efficacy of a frequency-domain interference suppression scheme whose complexity is independent of constellation size (which determines the desired resolution). Numerical results from computer simulation and from experiments on a hardware prototype show that the performance with the proposed joint mismatch and channel compensation technique is close to that without mismatch. While the proposed technique works with offline estimates of mismatch parameters, we provide an iterative, online method for joint estimation of mismatch and channel parameters which leverages the training overhead already available in communication signals.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work is a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact-match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, and peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and a large input range. The stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals are continuously monitored.
1.1
0.1
0.1
0.05
0.05
0.02
0
0
0
0
0
0
0
0
Secure random number generation in wireless sensor networks Reliable random number generation is crucial for many available security algorithms, and some of the methods presented in literature proposed to generate them based on measurements collected from the physical environment, in order to ensure true randomness. However the effectiveness of such methods can be compromised if an attacker is able to gain access to the measurements thus inferring the generated random number. In our paper, we present an algorithm that guarantees security for the generation process, in a real world scenario using wireless sensor nodes as the sources of the physical measurements. The proposed method uses distributed leader election for selecting a random source of data. We prove the robustness of the algorithm by discussing common security attacks, and we present theoretical and experimental evaluation regarding its complexity in terms of time and exchanged messages.
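To illustrate the idea of randomly and verifiably selecting the measurement source, here is a commit-reveal leader-election sketch. The five-node setup, the ticket scheme, and the tie-breaking rule are all hypothetical stand-ins; the paper's actual protocol and message flow differ in detail.

    import hashlib, os

    def commitment(value: bytes, nonce: bytes) -> str:
        return hashlib.sha256(nonce + value).hexdigest()

    # Each node draws a random ticket and broadcasts only its hash (commit),
    # so no node can choose its ticket after seeing the others.
    nodes = {}
    for nid in range(5):
        ticket, nonce = os.urandom(8), os.urandom(8)
        nodes[nid] = (ticket, nonce, commitment(ticket, nonce))

    # After the reveal phase, every commitment is checked, then the tickets
    # are combined so that no single node controls the outcome.
    assert all(commitment(t, n) == c for t, n, c in nodes.values())
    combined = hashlib.sha256(b"".join(t for t, _, _ in nodes.values())).digest()
    leader = min(nodes, key=lambda i: hashlib.sha256(combined + bytes([i])).digest())
    print("elected measurement source: node", leader)

The elected node then contributes its physical measurement as the entropy source, which an attacker observing any single fixed sensor cannot predict.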
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage, power-efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). A tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of the internal digital-to-analog converter (DAC) by 1 bit. The internal charge-redistribution DAC employs a unit capacitance of 0.5 fF, and the ADC operates near the thermal noise limit. To deal with capacitor mismatch, a reconfigurable capacitor array and a calibration procedure were developed. The prototype ADC, fabricated in a 40 nm CMOS process, achieves 46.8 dB SNDR and 58.2 dB SFDR at 1.1 MS/sec from a 0.5 V power supply. The FoM is 6.3 fJ/conversion-step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Efficient layered video delivery over multicarrier systems using optimized embedded modulation We tackle the problem of efficient image transmission over multi-carrier modulation (MCM) systems, proposing the use of a layered or multiresolution (MR) framework. In this work, we treat the source as being characterized by multiple layers of importance, and therefore deserving of multiple levels of noise immunity, i.e. having different BER requirements. We present the idea of embedded multi-carrier modulation (EMCM) as a very effective way of achieving this, and introduce a fast table-lookup based power allocation algorithm that optimizes the multicarrier constellation design in terms of maximizing the deliverable throughput bitrates for the different resolution layers, subject to a total power constraint. Simulation results of our EMCM system reveal substantial gains (up to about 25%) in deliverable bit rates over optimized TDM-based MCM designs. Further, in typical image transmission simulations using an embedded wavelet image coder, the EMCM approach yields almost 3 dB gains in delivered quality over conventional single-resolution MCM systems.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
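The dominance-frontier concept introduced in this paper admits a compact "runner" formulation once immediate dominators are known. The sketch below hard-codes a four-block diamond CFG and its idom map purely for illustration; a real compiler would compute idom itself (e.g., with the Lengauer-Tarjan algorithm).

```python
# Dominance frontiers via the runner walk: for each join point b, every
# predecessor's dominator-tree ancestors up to (but excluding) idom(b)
# have b in their frontier. CFG and idom below are assumed examples.
preds = {            # predecessors in a small diamond-shaped CFG
    "entry": [],
    "A": ["entry"],
    "B": ["A"], "C": ["A"],
    "D": ["B", "C"],  # join point: would need a phi in SSA construction
}
idom = {"A": "entry", "B": "A", "C": "A", "D": "A"}

df = {n: set() for n in preds}
for b, ps in preds.items():
    if len(ps) >= 2:                      # only join points contribute
        for p in ps:
            runner = p
            while runner != idom[b]:      # walk up the dominator tree
                df[runner].add(b)
                runner = idom[runner]
print(df)  # B and C each have D in their dominance frontier
```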
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
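Chord's single operation, mapping a key to its successor node on the identifier ring, can be sketched in a few lines. This toy version hashes names with SHA-1 as in the paper, but uses a 16-bit ring and a plain sorted list in place of finger tables, so it is illustrative only; the node names are made up.

```python
# Minimal sketch of Chord's key-to-node mapping (successor on the ring).
import hashlib

M = 16                       # identifier bits (Chord uses 160 with SHA-1)

def chord_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

nodes = sorted(chord_id(f"node-{i}") for i in range(8))

def successor(key_id: int) -> int:
    for n in nodes:          # first node clockwise from the key's position
        if n >= key_id:
            return n
    return nodes[0]          # wrap around the ring

k = chord_id("some-data-item")
print(f"key {k} -> node {successor(k)}")
```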
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 73.9%-Efficiency CMOS Rectifier Using a Lower DC Feeding (LDCF) Self-Body-Biasing Technique for Far-Field RF Energy-Harvesting Systems. A self-body-biasing technique is proposed for the differential-drive cross-coupled (DDCC) rectifier, with profound application in far-field RF energy-harvesting systems. The conventional source-to-body biasing and the proposed technique, known as Lower DC Feeding (LDCF), were fabricated in 130-nm CMOS and compared at operating frequencies of 500 MHz, 953 MHz, and 2 GHz along with a corresponding load...
A meter-range UWB transceiver chipset for around-the-head audio streaming Any around-the-body wireless system faces challenging requirements. This is especially true in the case of audio streaming around the head, e.g. for wireless audio headsets or hearing-aid devices. The behind-the-ear device typically serves multiple radio links, e.g. ear-to-ear, ear-to-pocket (a phone or MP3 player) or even a link between the ear and a remote base station such as a TV. Good audio quality is a prerequisite and mW-range power consumption is compulsory in view of battery size. However, the GHz communication channel typically shows a significant attenuation; for an ear-to-ear link, the attenuation due to the narrowband fade dominates and is in the order of 55 to 65 dB [1]. The typically small antennas, close to the human body, add another 10 to 15 dB of losses. For the ear-to-pocket and the ear-to-remote link, the losses due to body proximity and antenna size reduce, however the distance increases, resulting in a similar link budget requirement of 80 dB.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
An Ultra Low Power, RF Energy Harvesting Transceiver for Multiple Node Sensor Application An ultra low power, wirelessly-powered RF transceiver for wireless sensor networks is implemented using 180 nm CMOS technology. We propose a 98 μW, 457.5 MHz transmitter with output radiation power of -22 dBm. This transmitter utilizes the 915 MHz wirelessly powering RF signal by frequency division using a true-single-phase-clock (TSPC) divider to generate the carrier frequency with very low power consumption and small die area. The transmitter can support up to 5 Mbps data rate. The telemetry system uses an 8-stage Cockcroft-Walton rectifier to convert RF to DC voltage for energy harvesting. The bandgap reference and linear regulators provide stable DC voltage throughout the system. The receiver recovers data from the modulated wireless powering RF signal to perform time division multiple access (TDMA) for the multiple node system. Power consumption of the TDMA receiver is less than 15 μW. Our proposed transmitter and receiver each occupies 0.0018 mm² and 0.0135 mm² of active die area, respectively.
An Ultralow-Power Wake-Up Receiver Based on Direct Active RF Detection. An ultralow-power direct active RF detection wake-up receiver (WuRx) is presented. In order to reduce the power consumption and system complexity, a differential RF envelope detector is implemented in a complementary current-reuse architecture. The detector sensitivity is enhanced through an embedded matching network with signal passive amplification. A prototype receiver is fabricated in 0.18-μm ...
0.56 V, –20 dBm RF-Powered, Multi-Node Wireless Body Area Network System-on-a-Chip With Harvesting-Efficiency Tracking Loop A battery-less, multi-node wireless body area network (WBAN) system-on-a-chip (SoC) is demonstrated. An efficiency tracking loop is proposed that adjusts the rectifier's threshold voltage to maximize the wireless harvesting operation, resulting in a minimum RF sensitivity better than -20 dBm at 904.5 MHz. Each SoC node is injection-locked and time-synchronized with the broadcasted RF basestation power (up to a sensitivity of -33 dBm) using an injection-locked frequency divider (ILFD). Hence, every sensor node is phase-locked with the basestation and all nodes can wirelessly transmit TDMA sensor data concurrently. Designed in a 65 nm-CMOS process, the fabricated sensor SoC contains the energy harvesting rectifier and bandgap, duty-cycled ADC, digital logic, as well as the multi-node wireless clock synchronization and MICS-band transmitter. For a broadcasted basestation power of 20 dBm (30 dBm), experimental measurements verify correct powering, sensor reading, and wireless data transfer for a distance of 3 m (9 m). The entire biomedical system application is verified by reception of room and abdominal temperature monitoring.
High-Efficiency Differential-Drive CMOS Rectifier for UHF RFIDs A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small ...
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in “Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms,” pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ⩾ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
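For intuition, the sketch below implements the classic deterministic doubling strategy for two lanes, whose competitive ratio is 9; the paper's contribution is a randomized strategy that improves the expected ratio. The goal placement and distances here are illustrative assumptions.

```python
# Deterministic doubling for the 2-lane cow path: walk lane 0 and lane 1
# alternately to distances 1, 2, 4, ..., returning to the origin each time.
def doubling_search(goal_pos: int, goal_lane: int) -> int:
    """Total distance walked before the goal is reached."""
    walked, lane, i = 0, 0, 0
    while True:
        reach = 2 ** i
        if lane == goal_lane and reach >= goal_pos:
            return walked + goal_pos      # found on this outward sweep
        walked += 2 * reach               # out and back
        lane ^= 1                         # switch lanes
        i += 1                            # double the next sweep

d = 33
cost = doubling_search(d, goal_lane=1)
print(cost, cost / d)   # ratio approaches the worst case of 9
```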
Pinning adaptive synchronization of a general complex dynamical network There are two challenging fundamental questions in pinning control of complex networks: (i) how many nodes of a network with fixed structure and coupling strength should be pinned to reach network synchronization? (ii) how much coupling strength should be applied to a network with fixed structure and pinned nodes to realize network synchronization? To address these two questions, we propose a general complex dynamical network model and then further investigate its pinning adaptive synchronization. Based on this model, we obtain several novel adaptive synchronization criteria that give positive answers to these two questions. That is, we provide a simple approximate formula for estimating the required number of pinned nodes and the magnitude of the coupling strength for a given general complex dynamical network. Here, the coupling-configuration matrix and the inner-coupling matrix are not necessarily symmetric. Moreover, our pinning adaptive controllers are rather simple compared with some traditional controllers. A Barabási–Albert network example is finally given to show the effectiveness of the proposed synchronization criteria.
Wireless communications in the twenty-first century: a perspective Wireless communications are expected to be the dominant mode of access technology in the next century. Besides voice, a new range of services such as multimedia, high-speed data, etc. are being offered for delivery over wireless networks. Mobility will be seamless, realizing the concept of persons being in contact anywhere, at any time. Two developments are likely to have a substantial impact on t...
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing is unstable, time-varying, and nonlinear, with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
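A hedged aside on the "simple equivalent circuits" the abstract refers to: during regeneration, a cross-coupled pair is commonly modeled as a negative resistance, giving exponential growth of the input imbalance. The relations below are the textbook small-signal sketch (the transconductance g_m, load capacitance C, and logic level V_L are assumed symbols), not the paper's complete analysis.

```latex
% Small-signal regeneration sketch: exponential growth of the imbalance
\Delta v(t) = \Delta v(0)\, e^{t/\tau}, \qquad \tau = \frac{C}{g_m}
% Time to amplify the initial difference \Delta v(0) to a logic level V_L
t_{\mathrm{reg}} = \tau \ln\!\left( \frac{V_L}{\Delta v(0)} \right)
```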
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.1
0.02
0
0
0
0
0
0
0
Synthesizing information systems knowledge: A typology of literature reviews. • We proposed a typology of nine review types based on seven core dimensions. • The number of reviews in top-ranked IS journals has increased between 1999 and 2013. • Theoretical and narrative reviews are the most prevalent types in top IS journals. • We found inconsistencies in the labels used by authors to qualify IS reviews. • A majority of IS reviews reported only scholars as their target audience.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Architecture and design of adaptive object-models Many object-oriented information systems share an architectural style that emphasizes flexibility and run-time adaptability. Business rules are stored externally to the program such as in a database or XML files instead of in code. The object model that the user cares about is part of the database, and the object model of the code is just an interpreter of the users' object model. We call these systems "Adaptive Object-Models", because the users' object model is interpreted at runtime and can be changed with immediate (but controlled) effects on the system interpreting it. The real power in Adaptive Object-Models is that they have a definition of a domain model and rules for its integrity and can be configured by domain experts external to the execution of the program. This paper describes the Adaptive Object-Model architecture along with its strengths and weaknesses. It illustrates the Adaptive Object-Model architectural style by describing a framework for Medical Observations (following Fowler's Analysis Patterns) that we built.
Concurrent Data Materialization for XML-Enabled Database with Semantic Metadata For a company with many databases in different data models, it is necessary to consolidate them into one interchangeable data model and present data in more than one data model concurrently, whether to different users or to individual users who need to access the data in more than one data model. The benefit is to let users stick to their own data model when accessing a database in another data model. This paper presents semantic metadata that preserve database constraints for data materialization, supporting the user's view of the database on an ad hoc basis. The semantic metadata can store the captured semantics of a relational or an XML-enabled database into classes. The stored constraints and data can be materialized into a target database upon user request. The user is allowed to perform data materialization many times, alternating between models. The process can provide a relational as well as an XML view to the users simultaneously. This concurrent data materialization function can be applied in a data warehouse to consolidate heterogeneous databases into a fact table in a data model of the user's choice. Furthermore, a user can obtain either a relational view or an XML view of the same dataset of an XML-enabled database interchangeably.
A Framework for Considering Comprehensibility in Modeling. Comprehensibility in modeling is the ability of stakeholders to understand relevant aspects of the modeling process. In this article, we provide a framework to help guide exploration of the space of comprehensibility challenges. We consider facets organized around key questions: Who is comprehending? Why are they trying to comprehend? Where in the process are they trying to comprehend? How can we help them comprehend? How do we measure their comprehension? With each facet we consider the broad range of options. We discuss why taking a broad view of comprehensibility in modeling is useful in identifying challenges and opportunities for solutions.
Dependency-preserving normalization of relational and XML data Having a database design that avoids redundant information and update anomalies is the main goal of normalization techniques. Ideally, data as well as constraints should be preserved. However, this is not always achievable: while BCNF eliminates all redundancies, it may not preserve constraints, and 3NF, which achieves dependency preservation, may not always eliminate all redundancies. Our first goal is to investigate how much redundancy 3NF tolerates in order to achieve dependency preservation. We apply an information-theoretic measure and show that only prime attributes admit redundant information in 3NF, but their information content may be arbitrarily low. Then we study the possibility of achieving both redundancy elimination and dependency preservation by a hierarchical representation of relational data in XML. We provide a characterization of cases when an XML normal form called XNF guarantees both. Finally, we deal with dependency preservation in XML and show that like in the relational case, normalizing XML documents to achieve non-redundant data can result in losing constraints. By modifying the definition of XNF, we define another normal form for XML documents, X3NF, that generalizes 3NF for the case of XML and achieves dependency preservation.
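Checking whether a decomposition preserves a functional dependency ultimately rests on attribute-set closure: X -> Y holds exactly when Y is contained in the closure of X. The sketch below computes closures for a made-up schema; the FDs are illustrative assumptions, not an example from the paper.

```python
# Attribute closure under a set of functional dependencies (FDs).
def closure(attrs, fds):
    """fds: list of (lhs, rhs) frozensets; returns the closure of attrs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:   # FD fires, adds attrs
                result |= rhs
                changed = True
    return result

fds = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C")),
       (frozenset("CD"), frozenset("A"))]
print(sorted(closure({"A"}, fds)))      # ['A', 'B', 'C']
# A -> C is implied, since C lies in the closure of {A}:
print("C" in closure({"A"}, fds))       # True
```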
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in “Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms,” pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ⩾ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. In particular, our hypotheses and the designed adaptive controllers for network synchronization are rather simple in form, which is very useful for practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing is unstable, time-varying, and nonlinear, with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
Body-Area Powering With Human Body-Coupled Power Transmission and Energy Harvesting ICs This paper presents body-coupled power transmission and ambient energy harvesting ICs. The ICs utilize human body-coupling to deliver power to the entire body and, at the same time, harvest energy from ambient EM waves coupled through the body. The ICs improve the recovered power level by adapting to the varying skin-electrode interface parasitic impedance at both the TX and RX. To maximize the power output from the TX, dynamic impedance matching is performed amidst environment-induced variations. At the RX, the Detuned Impedance Booster (DIB) and the Bulk Adaptation Rectifier (BAR) are proposed to improve the power recovery and extend the power coverage further. To ensure maximum power extraction despite loading variations, the Dual-Mode Buck-Boost Converter (DM-BBC) is proposed. The ICs fabricated in 40 nm 1P8M CMOS recover up to 100 μW from body-coupled power transmission and 2.5 μW from ambient body-coupled energy harvesting. The ICs achieve full-body-area power delivery, with the power harvested from the ambient environment via the body-coupling mechanism independent of placement on the body. Both approaches show power sustainability for wearable electronics all around the human body.
Characterization and Modeling of the Capacitive HBC Channel The increasing interest in wireless body area networks has created the need for alternative communication schemes. One example of such schemes is the use of the human body as a communication medium. This technology is called human body communication (HBC), and it offers advantages over the most common radiation-based methods, which makes it an interesting alternative to implement body area networks. The aim of this paper is to identify the influence of a fixture on the HBC channel characterization, and an extended model that includes the test fixtures to explain the measured channel response is proposed. The model was tested against the channel measurement results, and a good experiment-model correlation was obtained. The results show that the test fixture has a nonnegligible influence and that an extended model, based on the physical meaning of the phenomena involved, helps to explain the channel frequency profile results and behavior.
Cavity Resonator Wireless Power Transfer System for Freely Moving Animal Experiments. Objective: The goal of this paper is to create a large wireless powering arena for powering small devices implanted in freely behaving rodents. Methods: We design a cavity resonator based wireless power transfer (WPT) system and utilize our previously developed optimal impedance matching methodology to achieve effective WPT performance for operating sophisticated implantable devices, made with mini...
Position and Orientation Insensitive Wireless Power Transmission for EnerCage-Homecage System. We have developed a new headstage architecture as part of a smart experimental arena, known as the EnerCage-HC2 system, which automatically delivers stimulation and collects behavioral data over extended periods with minimal small animal subject handling or personnel intervention in a standard rodent homecage. Equipped with a four-coil inductive link, the EnerCage-HC2 system wirelessly powers the ...
A Wireless Optogenetic Headstage with Multichannel Electrophysiological Recording Capability We present a small and lightweight fully wireless optogenetic headstage capable of optical neural stimulation and electrophysiological recording. The headstage is suitable for conducting experiments with small transgenic rodents, and features two implantable fiber-coupled light-emitting diodes (LEDs) and two electrophysiological recording channels. This system is powered by a small lithium-ion battery and is entirely built using low-cost commercial off-the-shelf components for better flexibility, reduced development time, and lower cost. Light stimulation uses customizable stimulation patterns of varying frequency and duty cycle. The optical power that is sourced from the LED is delivered to target light-sensitive neurons using implantable optical fibers, which provide a measured optical power density of 70 mW/mm² at the tip. The headstage uses a novel foldable rigid-flex printed circuit board design, which results in a lightweight and compact device. Recording experiments performed in the cerebral cortex of transgenic ChR2 mice under anesthetized conditions show that the proposed headstage can trigger neuronal activity using optical stimulation, while recording microvolt-amplitude electrophysiological signals.
A Trimodal Wireless Implantable Neural Interface System-on-Chip A wireless and battery-less trimodal neural interface system-on-chip (SoC), capable of 16-ch neural recording, 8-ch electrical stimulation, and 16-ch optical stimulation, all integrated on a 5 × 3 mm² chip fabricated in a 0.35-μm standard CMOS process, is presented. The trimodal SoC is designed to be inductively powered and communicated. The downlink data telemetry utilizes on-off keying pulse-position modulation (OOK-PPM) of the power carrier to deliver configuration and control commands at 50 kbps. The analog front-end (AFE) provides adjustable mid-band gain of 55-70 dB, low/high cut-off frequencies of 1-100 Hz/10 kHz, and input-referred noise of 3.46 μVrms within 1 Hz-50 kHz band. AFE outputs of every two-channel are digitized by a 50 kS/s 10-bit SAR-ADC, and multiplexed together to form a 6.78 Mbps data stream to be sent out by OOK modulating a 434 MHz RF carrier through a power amplifier (PA) and 6 cm monopole antenna, which form the uplink data telemetry. Optical stimulation has a switched-capacitor based stimulation (SCS) architecture, which can sequentially charge four storage capacitor banks up to 4 V and discharge them in selected μLEDs at instantaneous current levels of up to 24.8 mA on demand. Electrical stimulation is supported by four independently driven stimulating sites at 5-bit controllable current levels in ±(25-775) μA range, while active/passive charge balancing circuits ensure safety. In vivo testing was conducted on four anesthetized rats to verify the functionality of the trimodal SoC.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool, while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was the AT91M40400. The results clearly establish scratchpad memory as a low-power alternative in most situations, with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% of the cache memory.
Approximate counting, uniform generation and rapidly mixing Markov chains The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ ℝ or for no β ∈ ℝ. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique
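The paper's central claim, that suitably chosen dither makes moments of the total error input-independent even though the dither is not subtracted, can be observed numerically. A minimal sketch, assuming a uniform rounding quantizer and 2-LSB peak-to-peak TPDF dither (both standard choices, not parameters taken from the paper):

```python
# Nonsubtractive TPDF dither: the dither is added before quantization and
# never removed; the total error's variance becomes input-independent.
import numpy as np

rng = np.random.default_rng(1)
lsb = 1.0 / 256                        # quantizer step (illustrative)

def quantize(x):
    return np.round(x / lsb) * lsb

n = 100_000
x = 0.3 * np.sin(2 * np.pi * np.arange(n) / 1000)            # test input
tpdf = (rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)) * lsb
err_plain = quantize(x) - x            # undithered error, input-dependent
err_dith = quantize(x + tpdf) - x      # dither is NOT subtracted afterwards
print(err_plain.var() / lsb**2, err_dith.var() / lsb**2)
```

With TPDF dither the normalized error variance settles near 1/4 of the squared step size for any input (1/12 from quantization plus 1/6 from the dither), which is the moment-independence property the paper analyzes.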
DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing The DySER (Dynamically Specializing Execution Resources) architecture supports both functionality specialization and parallelism specialization. By dynamically specializing frequently executing regions and applying parallelism mechanisms, DySER provides efficient functionality and parallelism specialization. It outperforms an out-of-order CPU, Streaming SIMD Extensions (SSE) acceleration, and GPU acceleration while consuming less energy. The full-system field-programmable gate array (FPGA) prototype of DySER integrated into OpenSparc demonstrates a practical implementation.
A 93% efficiency reconfigurable switched-capacitor DC-DC converter using on-chip ferroelectric capacitors.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing is unstable, time-varying, and nonlinear, with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
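A toy rendering of the slope-dependent idea, where the estimated local slope sets the next sampling interval so that fast segments are sampled densely and flat segments sparsely; the test signal, bounds, and scaling are illustrative assumptions, not the paper's pulse-generator design.

```python
# Slope-dependent nonuniform sampling on a dense "analog" grid.
import numpy as np

fs = 1000                                  # dense reference grid, 1 kHz
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 1 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)

samples = [(0.0, x[0])]
i = 0
while i < len(x) - 1:
    slope = abs(x[i + 1] - x[i]) * fs      # local derivative estimate
    # Steeper slope -> shorter interval (bounded to 1..50 grid steps).
    step = int(np.clip(50 / (1 + slope), 1, 50))
    i = min(i + step, len(x) - 1)
    samples.append((t[i], x[i]))

print(f"{len(samples)} samples vs {len(x)} uniform -> "
      f"compression {len(x) / len(samples):.1f}x")
```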
1.1
0.1
0.1
0.1
0.05
0.025
0
0
0
0
0
0
0
0
Autonomous and distributed construction of locality aware skip graph An increasing number and variety of devices are connected to the Internet. Thus, it is expected that peer-to-peer (P2P) networks, which have no central servers, will solve the server overload issue in the client-server communication model. Among P2P information search schemes, we focus on Skip Graphs, because they exhibit strong churn resilience and have a range search function, which is desirable for advanced networks such as smart meter networks, smart agriculture, smart cities, vehicular ad hoc networks, and online social networks. However, the overlay of a Skip Graph is constructed regardless of the nodes' locations. As a result, the end-to-end delay between communication nodes becomes much longer than the potential minimum. In a conventional method, this problem is solved by using special landmark nodes. However, this approach sacrifices some advantages of Skip Graphs as a P2P system. Thus, we propose a Skip Graph construction method that does not use any special nodes. The simulation results show that the proposed method provides about 35% improvement in transmission distance without additional query hops.
Long-term availability prediction for groups of volunteer resources Volunteer computing uses the free resources in Internet and Intranet environments for large-scale computation and storage. Currently, 70 applications use over 12 PetaFLOPS of computing power from such platforms. However, these platforms are currently limited to embarrassingly parallel applications. In an effort to broaden the set of applications that can leverage volunteer computing, we focus on the problem of predicting if a group of resources will be continuously available for a relatively long time period. Ensuring the collective availability of volunteer resources is challenging due to their inherent volatility and autonomy. Collective availability is important for enabling parallel applications and workflows on volunteer computing platforms. We evaluate our predictive methods using real availability traces gathered from hundreds of thousands of hosts from the SETI@home volunteer computing project. We show that our prediction methods can reliably guarantee the availability of collections of volunteer resources, which is particularly useful for service deployments over volunteer computing environments.
Decentralized approach to resource availability prediction using group availability in a P2P desktop grid In a desktop grid model, the job (computational task) is submitted for execution in the resource only when the resource is idle. There is no guarantee that the job which has started to execute in a resource will complete its execution without any disruption from user activity (such as a keyboard stroke or mouse move) if the desktop machines are used for other purposes. This problem becomes more challenging in a Peer-to-Peer (P2P) model for a desktop grid where there is no central server that decides to allocate a job to a particular resource. This paper describes a P2P desktop grid framework that utilizes resource availability prediction, using group availability data. We improve the functionality of the system by submitting the jobs on machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.
SKIP + : A Self-Stabilizing Skip Graph Peer-to-peer systems rely on a scalable overlay network that enables efficient routing between its members. Hypercubic topologies facilitate such operations while each node only needs to connect to a small number of other nodes. In contrast to static communication networks, peer-to-peer networks allow nodes to adapt their neighbor set over time in order to react to join and leave events and failures. This article shows how to maintain such networks in a robust manner. Concretely, we present a distributed and self-stabilizing algorithm that constructs a (slightly extended) skip graph, SKIP+, in polylogarithmic time from any given initial state in which the overlay network is still weakly connected. This is an exponential improvement compared to previously known self-stabilizing algorithms for overlay networks. In addition, our algorithm handles individual joins and leaves locally and efficiently.
Evaluating Connection Resilience For The Overlay Network Kademlia Kademlia is a decentralized overlay network, up to now mainly used for highly scalable file sharing applications. Due to its distributed nature, it is free from single points of failure. Communication can happen over redundant network paths, which makes information distribution with Kademlia resilient against failing nodes and attacks. In this paper, we simulate Kademlia networks with varying parameters and analyze the number of node-disjoint paths. With our results, we show the influence of these parameters on the network connectivity and, therefore, the resilience against failing nodes and communication channels.
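A toy version of such an analysis fits in a few lines with networkx; the random regular graph below is a stand-in assumption for a simulated Kademlia overlay, not the paper's simulator or parameters.

```python
import networkx as nx

# Count node-disjoint paths between two peers in a random overlay.
# By Menger's theorem their number equals the local node connectivity,
# which bounds how many node failures can disconnect the pair.
G = nx.random_regular_graph(d=8, n=200, seed=1)   # stand-in overlay topology
s, t = 0, 100

paths = list(nx.node_disjoint_paths(G, s, t))
print(f"{len(paths)} node-disjoint paths between peers {s} and {t}")
print("local node connectivity:", nx.node_connectivity(G, s, t))
```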
Using the complementary nature of node joining and leaving to handle the churn problem in P2P networks Churn is a basic and inherent problem in P2P networks. Many relevant studies have addressed it, but all lack versatility. In this paper, a general solution is proposed that frees a peer-to-peer (P2P) network from having to pay much attention to the churn problem by introducing a logic layer named Dechurn, in which most churn can be eliminated. To exploit the complementary nature of node joining and leaving, a network scheme for handling churn, named Constellation, is designed on the Dechurn layer; through it, the resources cached in a node for its spouse node that has left the network are inherited by a node in the latent period. The simulation results indicate that the proposed solution is effective and efficient in handling churn and easy to put into practice.
P-Grid: a self-organizing structured P2P system Abstract: this paper was supported in part by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322 and by SNSF grant 2100064994, "Peer-to-Peer Information Systems." messages. From the responses it (randomly) selects certain peers to which direct network links are established
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
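The loop just described is short enough to sketch directly; in the sketch below a linear model stands in for the deep network (an assumption made for brevity), since only the way the gradient is computed would change.

```python
import numpy as np

# Stochastic gradient descent: for each small batch, compute outputs and
# errors, average the gradient over the batch, and step the weights in
# the direction opposite the gradient.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # inputs
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # hidden "ground truth"
y = X @ w_true + 0.1 * rng.normal(size=1000)    # noisy targets

w = np.zeros(5)                                 # adjustable weights ("knobs")
lr, batch = 0.05, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)   # a small set of examples
    err = X[idx] @ w - y[idx]                   # output error on the batch
    grad = X[idx].T @ err / batch               # noisy estimate of the gradient
    w -= lr * grad                              # move opposite the gradient
print("learned weights:", np.round(w, 2))       # close to w_true
```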
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
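As a concrete illustration of the feedforward pass and the backward chain-rule pass described above, here is a minimal two-layer ReLU network trained by hand-written backpropagation; the toy data, layer sizes, and learning rate are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy labels

W1, b1 = rng.normal(scale=0.5, size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)

lr = 0.1
for step in range(2000):
    # Forward pass: weighted sums, ReLU f(z) = max(z, 0), sigmoid output.
    z1 = X @ W1 + b1
    h = np.maximum(z1, 0.0)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: the chain rule, from the output back to each module.
    d_out = (out - y) / len(X)          # gradient at the output module
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = d_out @ W2.T                  # gradient w.r.t. the module's input
    d_z1 = d_h * (z1 > 0)               # ReLU passes gradient only where z > 0
    dW1, db1 = X.T @ d_z1, d_z1.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                     # gradient step on every weight
print("training accuracy:", ((out > 0.5) == y).mean())
```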
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explains half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. 
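The two layer types just described reduce to a few lines of array code; the 8x8 image and the edge-detecting filter below are illustrative assumptions.

```python
import numpy as np

def conv2d_relu(img, kernel):
    """One feature map: slide a shared filter bank over the image, then ReLU."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)  # shared weights
    return np.maximum(out, 0.0)

def max_pool(fmap, k=2):
    """Merge similar nearby features: maximum over non-overlapping k x k patches."""
    H, W = (fmap.shape[0] // k) * k, (fmap.shape[1] // k) * k
    f = fmap[:H, :W].reshape(H // k, k, W // k, k)
    return f.max(axis=(1, 3))   # invariant to small shifts within each patch

img = np.random.default_rng(0).normal(size=(8, 8))
vertical_edge = np.array([[1.0, -1.0], [1.0, -1.0]])
print(max_pool(conv2d_relu(img, vertical_edge)).shape)   # (3, 3) pooled map
```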
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. 
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enables generalization to new combinations of the values of learned features beyond those seen during training (for example, 2^n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components, each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning.
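The first-layer mechanism described above, a one-of-N input selecting a row of a weight matrix, is easy to make concrete; the tiny vocabulary and random embedding matrix below are assumptions, since real word vectors come out of training.

```python
import numpy as np

vocab = ["tuesday", "wednesday", "sweden", "norway"]
dim = 8
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), dim))   # embedding matrix (learned in practice)

def word_vector(word):
    """A one-of-N vector times E selects one row: that row is the word vector."""
    one_hot = np.zeros(len(vocab))
    one_hot[vocab.index(word)] = 1.0
    return one_hot @ E                   # identical to E[vocab.index(word)]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# After training on text, cosine(tuesday, wednesday) would be high; with
# random vectors the value below only demonstrates the lookup mechanics.
print(cosine(word_vector("tuesday"), word_vector("wednesday")))
```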
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of V^N, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real-valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer.
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.
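Returning to the memory cell described above, its gating arithmetic is compact enough to sketch; the sizes and random weights below are illustrative assumptions, not a trained model.

```python
import numpy as np

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W):
    """One LSTM time step: a gated accumulator cell plus a gated output."""
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z)      # input gate: how much new content to write
    f = sigmoid(W["f"] @ z)      # forget gate: when to clear the memory
    o = sigmoid(W["o"] @ z)      # output gate: what to expose
    g = np.tanh(W["g"] @ z)      # candidate content
    c = f * c + i * g            # self-connection of weight one, gated
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {k: rng.normal(scale=0.3, size=(n_hid, n_in + n_hid)) for k in "ifog"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):              # unfold over a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W)
print("final state vector:", np.round(h, 2))
```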
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101.
Information spreading in stationary Markovian evolving graphs Markovian evolving graphs [2] are dynamic-graph models where the links among a fixed set of nodes change over time according to an arbitrary Markovian rule. They are extremely general and can describe important dynamic-network scenarios well.
Approximately bisimilar symbolic models for nonlinear control systems Control systems are usually modeled by differential equations describing how physical phenomena can be influenced by certain control parameters or inputs. Although these models are very powerful when dealing with physical phenomena, they are less suited to describe software and hardware interfacing with the physical world. For this reason there is a growing interest in describing control systems through symbolic models that are abstract descriptions of the continuous dynamics, where each ''symbol'' corresponds to an ''aggregate'' of states in the continuous model. Since these symbolic models are of the same nature of the models used in computer science to describe software and hardware, they provide a unified language to study problems of control in which software and hardware interact with the physical world. Furthermore, the use of symbolic models enables one to leverage techniques from supervisory control and algorithms from game theory for controller synthesis purposes. In this paper we show that every incrementally globally asymptotically stable nonlinear control system is approximately equivalent (bisimilar) to a symbolic model. The approximation error is a design parameter in the construction of the symbolic model and can be rendered as small as desired. Furthermore, if the state space of the control system is bounded, the obtained symbolic model is finite. For digital control systems, and under the stronger assumption of incremental input-to-state stability, symbolic models can be constructed through a suitable quantization of the inputs.
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with an 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8% power added ...
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0..score_13: 1.11, 0.1, 0.1, 0.1, 0.1, 0.06, 0.013333, 0.000714, 0, 0, 0, 0, 0, 0
ABSynthe: Automatic Blackbox Side-channel Synthesis on Commodity Microarchitectures
TILE64 Processor: A 64-Core SoC with Mesh Interconnect The TILE64™ processor is a multicore SoC targeting the high-performance demands of a wide range of embedded applications across networking and digital multimedia. Its 64 tile processors are arranged in an 8x8 array and connect through a scalable 2D mesh network with high-speed I/Os on the periphery. Each general-purpose processor is identical and capable of running SMP Linux.
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Towards Evaluating the Robustness of Neural Networks Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the rate at which current attacks find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
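For intuition only (the paper's attacks are optimization-based and tailored to specific distance metrics), here is a toy sketch showing how a small L-infinity-bounded perturbation flips a linear classifier; all values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)              # weights of an assumed linear classifier
x = rng.normal(size=100)              # a "clean" input
label = np.sign(w @ x)

# Smallest per-coordinate budget that crosses the decision boundary,
# padded by 50% so the flip is unambiguous.
eps = 1.5 * abs(w @ x) / np.abs(w).sum()
x_adv = x - eps * label * np.sign(w)  # move every coordinate against the label

print(f"eps = {eps:.4f}")
print("clean score:", round(w @ x, 3), "adversarial score:", round(w @ x_adv, 3))
print("classification flipped:", np.sign(w @ x_adv) != label)
```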
FaCT: A Flexible, Constant-Time Programming Language We argue that C is unsuitable for writing timing-channel free cryptographic code that is both fast and readable. Readable implementations of crypto routines would contain high-level constructs like if statements, constructs that also introduce timing vulnerabilities. To avoid vulnerabilities, programmers must rewrite their code to dodge intuitive yet dangerous constructs, cluttering the code-base and potentially introducing new errors. Moreover, even when programmers are diligent, compiler optimization passes may still introduce branches and other sources of timing side channels. This status quo is the worst of both worlds: tortured source code that can still yield vulnerable machine code. We propose to solve this problem with a domain-specific language that permits programmers to intuitively express crypto routines and reason about secret values, and a compiler that generates efficient, timing-channel free assembly code.
MicroScope: Enabling Microarchitectural Replay Attacks A microarchitectural replay attack is a novel class of attack where an adversary can denoise nearly arbitrary microarchitectural side channels in a single run of the victim. The idea is to cause the victim to repeatedly replay by inducing pipeline flushes. In this article, we design, implement, and demonstrate our ideas in a framework, called MicroScope, that causes repeated pipeline flushes by inducing page faults.
CleanupSpec: An Undo Approach to Safe Speculation Speculation-based attacks affect hundreds of millions of computers. These attacks typically exploit caches to leak information, using speculative instructions to cause changes to the cache state. Hardware-based solutions that protect against such forms of attacks try to prevent any speculative changes to the cache sub-system by delaying them. For example, InvisiSpec, a recent work, splits the load into two operations: the first operation is speculative and obtains the value and the second operation is non-speculative and changes the state of the cache. Unfortunately, such a "Redo" based approach typically incurs slowdown due to the requirement of extra operations for correctly speculated loads, that form the large majority of loads. In this work, we propose CleanupSpec, an "Undo"-based approach to safe speculation. CleanupSpec is a hardware-based solution that mitigates these attacks by undoing the changes to the cache sub-system caused by speculative instructions, in the event they are squashed on a mis-speculation. As a result, CleanupSpec prevents information leakage on the correct path of execution due to any mis-speculated load and is secure against speculation-based attacks exploiting caches (we demonstrate a proof-of-concept defense on Spectre Variant-1 PoC). Unlike a Redo-based approach which incurs overheads for correct-path loads, CleanupSpec incurs overheads only for the wrong-path loads that are less frequent. As a result, CleanupSpec only incurs an average slowdown of 5.1% compared to a non-secure baseline. Moreover, CleanupSpec incurs a modest storage overhead of less than 1 kilobyte per core, for tracking and undoing the speculative changes to the caches.
Non-monopolizable caches: Low-complexity mitigation of cache side channel attacks We propose a flexibly-partitioned cache design that either drastically weakens or completely eliminates cache-based side channel attacks. The proposed Non-Monopolizable (NoMo) cache dynamically reserves cache lines for active threads and prevents other co-executing threads from evicting reserved lines. Unreserved lines remain available for dynamic sharing among threads. NoMo requires only simple modifications to the cache replacement logic, making it straightforward to adopt. It requires no software support, enabling it to automatically protect pre-existing binaries. NoMo results in performance degradation of about 1% on average. We demonstrate that NoMo can provide strong security guarantees for the AES and Blowfish encryption algorithms.
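A behavioural sketch of the reservation rule described above might look as follows; the way count, reservation quota, and random replacement among the allowed ways are illustrative assumptions, not the paper's hardware.

```python
import random

# NoMo-style victim selection: each co-running thread reserves some ways
# per set, and a thread picking a victim may not evict another thread's
# reserved ways, so its reserved lines cannot be monopolized or probed out.
WAYS, RESERVED = 8, 2

def pick_victim(set_lines, requester):
    """set_lines: list of (owner_thread, reserved_flag), one entry per way."""
    candidates = [i for i, (owner, reserved) in enumerate(set_lines)
                  if owner == requester or not reserved]
    return random.choice(candidates)   # stand-in for LRU among allowed ways

# One cache set shared by threads 0 and 1, each with 2 reserved ways.
set_lines = [(0, True), (0, True), (1, True), (1, True),
             (0, False), (1, False), (0, False), (1, False)]
victim = pick_victim(set_lines, requester=0)
print("thread 0 evicts way", victim, "->", set_lines[victim])
```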
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
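The residual reformulation is simple enough to sketch numerically; the block below (an assumption-level toy, not the paper's architecture) shows that a deep stack of near-identity residual blocks barely perturbs its input, one intuition for why such nets remain optimizable.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    """Learn F(x) and output F(x) + x: the identity shortcut carries x through."""
    h = np.maximum(x @ W1, 0.0)   # first weight layer + ReLU
    return x + h @ W2             # residual added to the identity shortcut

d = 16
x = rng.normal(size=d)
# Near-zero weights: each block starts close to the identity function.
W1 = rng.normal(scale=0.01, size=(d, d))
W2 = rng.normal(scale=0.01, size=(d, d))
out = x
for _ in range(100):              # stack 100 blocks
    out = residual_block(out, W1, W2)
print("distance from input after 100 blocks:", np.linalg.norm(out - x))
```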
The Information Structure of Indulgent Consensus To solve consensus, distributed systems have to be equipped with oracles such as a failure detector, a leader capability, or a random number generator. For each oracle, various consensus algorithms have been devised. Some of these algorithms are indulgent toward their oracle in the sense that they never violate consensus safety, no matter how the underlying oracle behaves. This paper presents a simple and generic indulgent consensus algorithm that can be instantiated with any specific oracle and be as efficient as any ad hoc consensus algorithm initially devised with that oracle in mind. The key to combining genericity and efficiency is to factor out the information structure of indulgent consensus executions within a new distributed abstraction, which we call "Lambda." Interestingly, identifying this information structure also promotes a fine-grained study of the inherent complexity of indulgent consensus. We show that instantiations of our generic algorithm with specific oracles, or combinations of them, match lower bounds on oracle-efficiency, zero-degradation, and one-step-decision. We show, however, that no leader or failure detector-based consensus algorithm can be, at the same time, zero-degrading and configuration-efficient. Moreover, we show that leader-based consensus algorithms that are oracle-efficient are inherently zero-degrading, but some failure detector-based consensus algorithms can be both oracle-efficient and configuration-efficient. These results highlight some of the fundamental trade-offs underlying each oracle.
Multi-objective optimization using genetic algorithms: A tutorial Multi-objective formulations are realistic models for many complex engineering optimization problems. In many real-life problems, objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview and tutorial is presented describing genetic algorithms (GA) developed specifically for problems with multiple objectives. They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution diversity.
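The notion of non-dominance that such specialized fitness functions build on is easy to state in code. The following sketch (minimization assumed, objective values as tuples) extracts the non-dominated set of a small population; it is a generic illustration, not any specific GA from the tutorial.

```python
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b` (minimization): `a` is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(population):
    """The non-dominated front: the fitness layering a multi-objective GA
    uses to rank candidates before selection."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Two conflicting objectives: f1 = x and f2 = (x - 2)**2.
pop = [(x, (x - 2) ** 2) for x in [0.0, 0.5, 1.0, 2.0, 3.0]]
print(non_dominated(pop))  # x = 3.0 is dominated by x = 2.0; the rest trade off
```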
A 2.4GHz sub-harmonically injection-locked PLL with self-calibrated injection timing A low-phase-noise integer-N phase-locked loop (PLL) is attractive in many applications, such as clock generation and analog-to-digital conversion. The sub-harmonically injection-locked technique, sub-sampling technique, and the multiplying delay-locked loop (MDLL) can significantly improve the phase noise of an integer-N PLL. In the sub-harmonically injection-locked technique, to inject a low-frequency reference clock into a high-frequency voltage-controlled oscillator (VCO), the injection timing should be tightly controlled. If the injection timing varies due to process variation, it may cause a large reference spur or even cause the PLL to fail to lock. A sub-harmonically injection-locked PLL (SILPLL) adopts a sub-sampling phase-detector (PD) to automatically align the phase between the injection pulse and a VCO. However, a sub-sampling PD has a small capture range and a low bandwidth. The high-frequency non-linear effects of a sub-sampling PD may degrade the accuracy and limit the maximum speed of a VCO. In addition, a frequency-locked loop is needed for a sub-sampling PD. A delay line is manually adjusted to achieve the correct injection timing. However, the delay line is sensitive to process variations. Thus, the injection timing should be calibrated.
A Minimally Invasive 64-Channel Wireless μECoG Implant Emerging applications in brain-machine interface systems require high-resolution, chronic multisite cortical recordings, which cannot be obtained with existing technologies due to high power consumption, high invasiveness, or inability to transmit data wirelessly. In this paper, we describe a microsystem based on electrocorticography (ECoG) that overcomes these difficulties, enabling chronic recording and wireless transmission of neural signals from the surface of the cerebral cortex. The device comprises a highly flexible, high-density, polymer-based 64-channel electrode array and a flexible antenna, bonded to a 2.4 mm × 2.4 mm CMOS integrated circuit (IC) that performs 64-channel acquisition, wireless power and data transmission. The IC digitizes the signal from each electrode at 1 kS/s with 1.2 μV input referred noise, and transmits the serialized data using a 1 Mb/s backscattering modulator. A dual-mode power-receiving rectifier reduces data-dependent supply ripple, enabling the integration of small decoupling capacitors on chip and eliminating the need for external components. Design techniques in the wireless and baseband circuits result in over 16× reduction in die area with a simultaneous 3× improvement in power efficiency over the state of the art. The IC consumes 225 μW and can be powered by an external reader transmitting 12 mW at 300 MHz, which is over 3× lower than IEEE and FCC regulations.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
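The successive-approximation search itself is just a bit-wise binary comparison loop. The sketch below shows that idealized behavior only; the function name is hypothetical, and the capacitor splitting, constant common mode, and event-triggered correction that are the brief's actual contributions address non-idealities this sketch ignores.

```python
def sar_convert(vin, vref=1.0, bits=10):
    """Idealized SAR conversion as a binary search: each cycle trials one bit
    of the DAC code (MSB first) and keeps it if the trial level does not
    exceed the input."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if trial * vref / (1 << bits) <= vin:  # comparator decision
            code = trial
    return code

print(sar_convert(0.3337, vref=0.65))  # 10-bit code under a 0.65 V reference
```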
1.1
0.1
0.1
0.1
0.1
0.1
0.033333
0.00625
0
0
0
0
0
0
Event-Triggered Adaptive Fault-Tolerant Control for a Class of Nonlinear Multiagent Systems With Sensor and Actuator Faults This paper investigates the leader-following consensus control problem for a class of nonlinear multiagent systems subject to sensor and actuator faults under a fixed directed graph. First, a fault compensation mechanism is proposed to handle the multiple faults, wherein adaptive parameters substitute for the unknown fault coefficients. Then, the command filtering method is employed to avoid the explosion of complexity caused by repeated differentiation of the virtual control signal. Furthermore, neural-network-based state observers are designed to reconstruct the unmeasurable states of the nonlinear multiagent systems. According to the given design approach, a switching-threshold-based event-triggered adaptive fault-tolerant control strategy is developed that ensures all the signals in the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB). Finally, simulation results are provided to demonstrate the validity of the presented method.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
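The dominance-frontier computation is short once immediate dominators are known. Below is a sketch in the later Cooper-Harvey-Kennedy formulation of the same construction (not the paper's original presentation), with a hand-supplied dominator tree for a diamond-shaped CFG.

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from predecessor lists and immediate
    dominators. DF(n) is where phi-functions for definitions in n must be
    placed when building SSA form."""
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:  # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))  # 'a' and 'b' have 'join' in their DF
```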
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
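The single operation Chord exports, mapping a key to a node, reduces to finding the first node identifier clockwise from the key's identifier on the hash ring. The sketch below does the lookup linearly over a sorted identifier list; the function names and the 16-bit ring size are illustrative choices, and real Chord uses SHA-1's full width plus finger tables for O(log n) routing.

```python
import hashlib
from bisect import bisect_right

def chord_id(key, m=16):
    """Hash a key onto the Chord identifier circle of size 2**m."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, k):
    """The node responsible for identifier k: the first node at or after k,
    clockwise, wrapping around the circle (linear sketch of the lookup)."""
    i = bisect_right(node_ids, k)
    return node_ids[i % len(node_ids)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
key = chord_id("some-data-item")
print(f"key {key} -> node {successor(nodes, key)}")
```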
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
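As one concrete instance from the review's list of applications, here is ADMM applied to the lasso in numpy: an x-minimization (a cached Cholesky solve), a z-minimization (soft thresholding), and a dual ascent on the scaled multiplier. The parameter values and fixed iteration count are arbitrary choices for the sketch, not recommendations from the review.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z.
    The ridge-like x-update reuses one Cholesky factorization; the z-update is
    soft thresholding; u is the scaled dual variable (textbook splitting)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))  # sparse estimate of x_true
```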
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Bio-Inspired Cochlear Heterodyning Architecture for an RF Fovea We discuss the use of cochlear models for spectrum analysis at radio frequencies. We describe performance characteristics of such models, including noise, dynamic range, and frequency resolution. We show that the addition of phase information improves frequency estimation as compared to the use of amplitude information alone. In particular, the use of both amplitude and phase information in a novel nonlinear bio-inspired center-surround coincidence-detection stage simultaneously improves frequency estimation and implements a lowpass-to-bandpass transformation on cochlear outputs. In order to further improve frequency estimation we propose a novel wireless receiver architecture that is a broadband generalization of narrowband heterodyning systems commonly used in radio. We term this architecture cochlear heterodyning. It exploits the efficiency of cochlear spectrum analysis to perform parallel, multi-scale analysis of wideband signals and can be constructed with cochlea-like traveling-wave structures. When combined with our prior work on an RF cochlea, such architectures may be useful in cognitive radios for creating RF foveas that select narrowband components present within wideband, but spectrally sparse signals. The operation of RF foveas is analogous to how the eye foveates on narrow but interesting portions of an image. Analogies between spectrum analysis and the process of successive-subranging analog-to-digital conversion illustrate how successively finer frequency resolution is achieved in an RF fovea. Finally, we show that RF foveas can be used in feedback loops to perform interference cancellation.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A pipelined noise shaping coder for fractional-N frequency synthesis In this paper, we present the design considerations and implementation aspects of a pipelined all-digital fourth-order multi-stage-noise-shaping (MASH) delta-sigma (ΔΣ) modulator suitable for fractional-N (F-N) phase-locked loop (PLL) frequency synthesis applications. In an effort to reduce the hardware complexity and power consumption, the alignment registers, which are normall...
Second and third-order successive requantizers for spurious tone reduction in low-noise fractional-N PLLs This paper presents 2nd- and 3rd-order digital requantizers which can be used as drop-in replacements for digital delta-sigma modulators in analog fractional-N PLLs to reduce fractional spurs. The requantizers are demonstrated and compared to conventional delta-sigma modulators in a low-noise 3.35 GHz PLL IC and shown to offer significant reductions in worst-case spurious tones with similar phase noise relative to their delta-sigma modulator counterparts.
Folded Noise Prediction in Nonlinear Fractional-N Frequency Synthesizers The presence of nonlinearities in a fractional-N frequency synthesizer leads to the generation of an additional component of noise that appears in the output phase noise spectrum. This nonlinearity-induced noise component manifests itself as spurious tones and an elevated noise floor, also known as folded noise. This paper presents a mathematical analysis of the folded noise generated in fractiona...
An Alternative Analysis of Noise Folding in Fractional-N Synthesizers A new method of analyzing the effect of charge pump mismatches upon the phase noise of ΣΔ fractional-N synthesizers is proposed. This approach produces a simple, universal relationship between the mismatch and the ratio of the noise floor to the peak of the quantization noise spectrum. Simulations confirm that the ratio is relatively independent of other synthesizer parameters such as the ΣΔ modulator order, the shape of the spectrum, and the maximum phase excursion at the feedback divider output.
Second and Third-Order Noise Shaping Digital Quantizers for Low Phase Noise and Nonlinearity-Induced Spurious Tones in Fractional-N PLLs. Noise shaping digital quantizers, most commonly digital delta-sigma (ΔΣ) modulators, are used in fractional-N phase-locked loops (PLLs) to enable fractional frequency tuning. Unfortunately, their quantization noise is subjected to nonlinear distortion because of the PLL's inevitable non-ideal analog circuit behavior, which induces spurious tones in the PLL's phase error. Successive requantizers ha...
A modeling approach for Σ-Δ fractional-N frequency synthesizers allowing straightforward noise analysis A general model of phase-locked loops (PLLs) is derived which incorporates the influence of divide value variations. The proposed model allows straightforward noise and dynamic analyses of Σ-Δ fractional-N frequency synthesizers and other PLL applications in which the divide value is varied in time. Based on the derived model, a general parameterization is presented that further simplifies noise calculations. The framework is used to analyze the noise performance of a custom Σ-Δ synthesizer implemented in a 0.6 μm CMOS process, and accurately predicts the measured phase noise to within 3 dB over the entire frequency offset range spanning 25 kHz to 10 MHz.
Spurious Tone Suppression Techniques Applied to a Wide-Bandwidth 2.4 GHz Fractional- N PLL This paper demonstrates that spurious tones in the output of a fractional-N PLL can be reduced by replacing the ΔΣ modulator with a new type of digital quantizer and adding a charge pump offset combined with a sampled loop filter. It describes the underlying mechanisms of the spurious tones, proposes techniques that mitigate the effects of the mechanisms, and presents a phase noise cancell...
Analysis and modeling of bang-bang clock and data recovery circuits A large-signal piecewise-linear model is proposed for bang-bang phase detectors that predicts characteristics of clock and data recovery circuits such as jitter transfer, jitter tolerance, and jitter generation. The results are validated by 1-Gb/s and 10-Gb/s CMOS prototypes using an Alexander phase detector and an LC oscillator.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm² in the above processes and they are capable of delivering >25 mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
Dynamic sensor collaboration via sequential Monte Carlo We consider the application of sequential Monte Carlo (SMC) methods for Bayesian inference to the problem of information-driven dynamic sensor collaboration in clutter environments for sensor networks. The dynamics of the system under consideration are described by nonlinear sensing models within randomly deployed sensor nodes. The exact solution to this problem is prohibitively complex due to the nonlinear nature of the system. The SMC methods are, therefore, employed to track the probabilistic dynamics of the system and to make the corresponding Bayesian estimates and predictions. To meet the specific requirements inherent in sensor networks, such as low-power consumption and collaborative information processing, we propose a novel SMC solution that makes use of the auxiliary particle filter technique for data fusion at densely deployed sensor nodes, and the collapsed kernel representation of the a posteriori distribution for information exchange between sensor nodes. Furthermore, an efficient numerical method is proposed for approximating the entropy-based information utility in sensor selection. It is seen that under the SMC framework, the optimal sensor selection and collaboration can be implemented naturally, and significant improvement is achieved over existing methods in terms of localizing and tracking accuracies.
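The SMC machinery underneath can be illustrated with a one-dimensional bootstrap filter: propagate particles through the motion model, reweight by the measurement likelihood, and resample when the effective sample size collapses. This is a generic sketch with invented noise levels, not the auxiliary particle filter or the collapsed-kernel exchange the paper develops.

```python
import numpy as np

def particle_filter_step(particles, weights, z,
                         motion_std=0.5, obs_std=1.0, rng=None):
    """One bootstrap SMC update for 1-D tracking: propagate, reweight by the
    Gaussian measurement likelihood, and resample if the effective sample
    size drops below half the particle count."""
    rng = rng or np.random.default_rng()
    particles = particles + rng.normal(0, motion_std, particles.shape)
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(1)
particles = rng.normal(0, 2, 500)
weights = np.full(500, 1 / 500)
for z in [0.2, 0.7, 1.1, 1.6]:  # noisy position measurements
    particles, weights = particle_filter_step(particles, weights, z, rng=rng)
print("estimate:", np.sum(weights * particles))
```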
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment which results in resource allocation problems, or, are job-based where for each invocation, the job along with the data has to be transferred for remote execution resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a Bit-Torrent like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.069412
0.066667
0.066667
0.033333
0.017659
0.012169
0.000653
0
0
0
0
0
0
0
Feedforward Categorization on AER Motion Events Using Cortex-Like Features in a Spiking Neural Network. This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Modified National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.
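The leaky integrate-and-fire dynamics behind the tempotron stage can be sketched in a few lines: weighted input spikes are injected into a leaking membrane potential, and a threshold crossing produces the output spike used as the binary class decision. The constants and the instantaneous-synapse simplification are assumptions for illustration; the tempotron proper uses a double-exponential kernel and trains the weights.

```python
import numpy as np

def lif_response(spike_times, weights, t_end=20.0, dt=0.1, tau=10.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron driven by weighted input
    spikes; return (fired?, spike time). Simplified: exponential leak,
    instantaneous synapses, no refractory period."""
    v = 0.0
    for now in np.arange(0.0, t_end, dt):
        v *= np.exp(-dt / tau)                   # membrane leak
        for ts, w in zip(spike_times, weights):  # inject arriving spikes
            if abs(ts - now) < dt / 2:
                v += w
        if v >= v_th:
            return True, round(now, 3)           # output spike = class decision
    return False, None

print(lif_response(spike_times=[5.0, 6.0, 7.0], weights=[0.5, 0.4, 0.4]))
```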
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present the parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2GHz general purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture we demonstrate the energy benefit resulted from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, it is the first time that the approximate computing and parallel processing are applied to FPGA based spiking neural networks. The influence of the parallel processing on the benefits of approximate computing is also discussed in detail.
Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons. Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce the detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation are presented with all the hardware cost of memory and multiplier resources removed and with hardware performance of computational speed enhanced by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance the NoC performance including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work can provide an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that supports the scalability of the approach.
Spike Counts based Low Complexity SNN Architecture with Binary Synapse. In this paper, we present an energy and area efficient spike neural network (SNN) processor based on novel spike counts based methods. For the low cost SNN design, we propose hardware-friendly complexity reduction techniques for both of learning and inferencing modes of operations. First, for the unsupervised learning process, we propose a spike counts based learning method. The novel learning app...
Spiking Neural Networks Hardware Implementations and Challenges: A Survey Neuromorphic computing has become a major research field for both academic and industrial actors. As opposed to Von Neumann machines, brain-inspired processors aim at bringing closer the memory and the computational elements to efficiently evaluate machine learning algorithms. Recently, spiking neural networks, a generation of cognitive algorithms employing computational primitives mimicking neuron and synapse operational principles, have become an important part of deep learning. They are expected to improve the computational performance and efficiency of neural networks, but they are best suited for hardware able to support their temporal dynamics. In this survey, we present the state of the art of hardware implementations of spiking neural networks and the current trends in algorithm elaboration from model selection to training mechanisms. The scope of existing solutions is extensive; we thus present the general framework and study on a case-by-case basis the relevant particularities. We describe the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level and discuss their related advantages and challenges.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
RowClone: fast and energy-efficient in-DRAM bulk data copy and initialization Several system-level operations trigger bulk data copy or initialization. Even though these bulk data operations do not require any computation, current systems transfer a large quantity of data back and forth on the memory channel to perform such operations. As a result, bulk data operations consume high latency, bandwidth, and energy--degrading both system performance and energy efficiency. In this work, we propose RowClone, a new and simple mechanism to perform bulk copy and initialization completely within DRAM -- eliminating the need to transfer any data over the memory channel to perform such operations. Our key observation is that DRAM can internally and efficiently transfer a large quantity of data (multiple KBs) between a row of DRAM cells and the associated row buffer. Based on this, our primary mechanism can quickly copy an entire row of data from a source row to a destination row by first copying the data from the source row to the row buffer and then from the row buffer to the destination row, via two back-to-back activate commands. This mechanism, which we call the Fast Parallel Mode of RowClone, reduces the latency and energy consumption of a 4KB bulk copy operation by 11.6x and 74.4x, respectively, and a 4KB bulk zeroing operation by 6.0x and 41.5x, respectively. To efficiently copy data between rows that do not share a row buffer, we propose a second mode of RowClone, the Pipelined Serial Mode, which uses the shared internal bus of a DRAM chip to quickly copy data between two banks. RowClone requires only a 0.01% increase in DRAM chip area. We quantitatively evaluate the benefits of RowClone by focusing on fork, one of the frequently invoked system calls, and five other copy and initialization intensive applications. Our results show that RowClone can significantly improve both single-core and multi-core system performance, while also significantly reducing main memory bandwidth and energy consumption.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
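For context on the row above: the paper generalizes the classical Lloyd-Max quantizer to the distributed, n-sensor setting. A minimal Python sketch of the classical single-sensor iteration only, with function and parameter names (`lloyd_max`, `levels`) invented here for illustration:

```python
import numpy as np

def lloyd_max(samples, levels, iters=100):
    """Classical Lloyd-Max scalar quantizer design: alternate between
    midpoint decision boundaries and centroid updates until convergence."""
    # Initialize representation points from sample quantiles.
    reps = np.quantile(samples, np.linspace(0.05, 0.95, levels))
    for _ in range(iters):
        # Decision boundaries sit midway between adjacent representation points.
        bounds = (reps[:-1] + reps[1:]) / 2
        cells = np.digitize(samples, bounds)
        # Each representation point moves to the centroid (mean) of its cell.
        new = np.array([samples[cells == k].mean() if np.any(cells == k) else reps[k]
                        for k in range(levels)])
        if np.allclose(new, reps):
            break
        reps = new
    return reps, bounds

rng = np.random.default_rng(0)
reps, bounds = lloyd_max(rng.normal(size=10_000), levels=4)
print(reps)  # close to the optimal 4-level quantizer for a standard Gaussian
```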
Estimating continuous distributions in Bayesian classifiers When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.
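A small sketch of the two density estimators this abstract compares, assuming a single 1-D feature and a hand-picked kernel bandwidth (both simplifications of the paper's setup):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def kde_pdf(x, train, bandwidth):
    # Nonparametric estimate: average of Gaussian kernels centered on training points.
    return gaussian_pdf(x[:, None], train[None, :], bandwidth).mean(axis=1)

# Toy class whose distribution is bimodal: a single Gaussian fits it poorly,
# while the kernel estimate captures both modes.
rng = np.random.default_rng(1)
x0 = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])
query = np.array([0.0])

p_gauss = gaussian_pdf(query, x0.mean(), x0.std())  # the "normality assumption"
p_kde = kde_pdf(query, x0, bandwidth=0.5)           # nonparametric alternative
print(p_gauss, p_kde)  # the Gaussian wrongly puts high density between the modes
```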
The rainbow skip graph: a fault-tolerant constant-degree distributed data structure We present a distributed data structure, which we call the rainbow skip graph. To our knowledge, this is the first peer-to-peer data structure that simultaneously achieves high fault-tolerance, constant-sized nodes, and fast update and query times for ordered data. It is a non-trivial adaptation of the SkipNet/skip-graph structures of Harvey et al. and Aspnes and Shah, so as to provide fault-tolerance as these structures do, but to do so using constant-sized nodes, as in the family tree structure of Zatloukal and Harvey. It supports successor queries on a set of n items using O(log n) messages with high probability, an improvement over the expected O(log n) messages of the family tree. Our structure achieves these results by using the following new constructs: • Rainbow connections: parallel sets of pointers between related components of nodes, so as to achieve good connectivity between "adjacent" components, using constant-sized nodes. • Hydra components: highly-connected, highly fault-tolerant components of constant-sized nodes, which will contain relatively large connected subcomponents even under the failure of a constant fraction of the nodes in the component. We further augment the hydra components in the rainbow skip graph by using erasure-resilient codes to ensure that any large subcomponent of nodes in a hydra component is sufficient to reconstruct all the data stored in that component. By carefully maintaining the size of related components and hydra components to be O(log n), we are able to achieve fast times for updates and queries in the rainbow skip graph. In addition, we show how to make the communication complexity for updates and queries be worst case, at the expense of more conceptual complexity and a slight degradation in the node congestion of the data structure.
High-performance error amplifier for fast transient DC-DC converters. A new error amplifier is presented for fast transient response of DC-DC converters. The amplifier has low quiescent current to achieve high power-conversion efficiency, yet it can supply sufficient current during large-signal operation. Two comparators detect large-signal variations and turn on an extra current supplier when necessary. The amount of extra current is well controlled, so that the system...
Electromagnetic regenerative suspension system for ground vehicles This paper considers an electromagnetic regenerative suspension system (ERSS) that recovers the kinetic energy originating from vehicle vibration, energy that is otherwise dissipated in traditional shock absorbers. It can also be used as a controllable damper that improves the vehicle's ride and handling performance. The proposed electromagnetic regenerative shock absorbers (ERSAs) utilize a linear or a rotational electromagnetic generator to convert the kinetic energy from suspension vibration into electricity, which can be used to reduce the load on the alternator so as to improve fuel efficiency. A complete ERSS is discussed here that includes the regenerative shock absorber, the power electronics for power regulation and suspension control, and an electronic control unit (ECU). Different shock absorber designs are proposed and compared for simplicity, efficiency, energy density, and controlled suspension performance. Both simulation and experiment results are presented and discussed.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
Scores (score_0–score_13): 1.054979, 0.05, 0.05, 0.05, 0.05, 0.025, 0.001856, 0, 0, 0, 0, 0, 0, 0
Ristretto: An Atomized Processing Architecture for Sparsity-Condensed Stream Flow in CNN Low-precision quantization and sparsity have been widely explored in CNN acceleration due to their effectiveness in reducing computational complexity and memory requirements. However, to support variable numerical precision and sparse computation, prior accelerators design flexible multipliers or sparse dataflow separately. A uniform solution that simultaneously exploits mixed-precision and dual-sided irregular sparsity for CNN acceleration is still lacking. Through an in-depth review of existing precision-scalable and sparse accelerators, we observe that a direct combination of low-level multipliers and high-level sparse dataflow from both sides is challenging due to their orthogonal design spaces. To this end, in this paper, we propose condensed streaming computation. By representing non-zero weights and activations as atomized streams, the low-level mixed-precision multiplication and high-level sparse convolution can be unified into a shared dataflow through hierarchical data reuse. Based on the condensed streaming computation, we propose Ristretto, an atomized architecture that exploits both mixed-precision and dual-sided irregular sparsity for CNN inference. We implement Ristretto in a 28nm technology node. Extensive evaluations show that Ristretto consistently outperforms three state-of-the-art CNN accelerators, including Bit Fusion, Laconic, and SparTen, in terms of performance and energy efficiency.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
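As a companion to the abstract above, a compact sketch of a dominance-frontier computation. It deliberately uses a simple iterative dominator-set fixpoint plus the later Cooper-Harvey-Kennedy frontier walk rather than the paper's original algorithm, so it is illustrative rather than faithful:

```python
def dominators(succ, entry):
    """Iterative dominator-set computation over a CFG given as {node: [succs]}."""
    nodes = set(succ)
    pred = {n: [] for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            pred[s].append(n)
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            inter = set.intersection(*(dom[p] for p in pred[n])) if pred[n] else set()
            new = {n} | inter
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, pred

def dominance_frontiers(succ, entry):
    dom, pred = dominators(succ, entry)
    # Immediate dominator = the strict dominator with the largest dominator set.
    idom = {n: max(dom[n] - {n}, key=lambda d: len(dom[d]), default=None) for n in succ}
    df = {n: set() for n in succ}
    # Cooper/Harvey/Kennedy walk: from each predecessor of a join node, climb
    # the idom chain; every node passed before idom[b] has b in its frontier.
    for b in succ:
        if len(pred[b]) >= 2:
            for p in pred[b]:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; both rejoin at m. The frontier of a and b is {m}.
cfg = {"entry": ["a", "b"], "a": ["m"], "b": ["m"], "m": []}
print(dominance_frontiers(cfg, "entry"))
```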
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
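A toy sketch of the one operation Chord provides, mapping a key to its successor node on the identifier ring. Real Chord resolves this in O(log n) hops through per-node finger tables; this centralized version (names `Ring`, `chord_id`, and the 16-bit identifier space are all illustrative) shows only the key-to-node mapping:

```python
import hashlib
from bisect import bisect_right

M = 2 ** 16  # deliberately small identifier space for illustration

def chord_id(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % M

class Ring:
    """The node responsible for a key is its successor: the first node
    identifier clockwise from the key's identifier on the ring."""
    def __init__(self, nodes):
        self.ids = sorted(chord_id(n) for n in nodes)
        self.by_id = {chord_id(n): n for n in nodes}
    def successor(self, key: str) -> str:
        kid = chord_id(key)
        i = bisect_right(self.ids, kid) % len(self.ids)  # wrap around the ring
        return self.by_id[self.ids[i]]

ring = Ring([f"node{i}" for i in range(8)])
print(ring.successor("some-data-item"))
```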
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
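A minimal sketch of ADMM applied to one of the problems this review discusses, the lasso, in the standard scaled-dual form (minimize 0.5·||Ax − b||² + λ·||z||₁ subject to x = z). Parameter choices here are arbitrary illustrations:

```python
import numpy as np

def soft(v, k):  # soft-thresholding: the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for lasso: alternate a quadratic x-update, an l1 proximal
    z-update, and a scaled dual update u."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Cache the matrix inverse reused by every x-update.
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))   # quadratic subproblem
        z = soft(x + u, lam / rho)      # l1 proximal step
        u = u + x - z                   # scaled dual update
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=60)
print(np.round(admm_lasso(A, b, lam=1.0), 2))  # sparse estimate near x_true
```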
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2–0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Sub-0.25-pJ/bit 47.6-to-58.8-Gb/s Reference-Less FD-Less Single-Loop PAM-4 Bang-Bang CDR With a Deliberate-Current-Mismatch Frequency Acquisition Technique in 28-nm CMOS This article reports a half-rate single-loop bang-bang clock and data recovery (BBCDR) circuit without the need for a reference clock or a frequency detector (FD). Specifically, we propose a deliberate-current-mismatch charge-pump pair to enable fast and robust frequency acquisition without identifying the frequency error polarity. This technique eliminates the need for a complex high-speed data or clock pa...
A Design Procedure for All-Digital Phase-Locked Loops Based on a Charge-Pump Phase-Locked-Loop Analogy In this brief, a systematic design procedure for a second-order all-digital phase-locked loop (PLL) is proposed. The design procedure is based on the analogy between a type-II second-order analog PLL and an all-digital PLL. The all-digital PLL design inherits the frequency response and stability characteristics of the analog prototype PLL.
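A back-of-envelope sketch of the analogy the brief builds on: design a continuous-time type-II PI loop from a target natural frequency and damping, then discretize with the bilinear transform. The gain names (`kpd`, `kdco`) and their normalizations are assumptions for illustration, not the brief's exact formulation:

```python
import numpy as np

def adpll_gains(fn_hz, zeta, kpd, kdco, fupd_hz):
    """Map a type-II analog PLL prototype (natural frequency fn, damping zeta)
    to digital PI loop-filter coefficients via the bilinear transform.
    kpd = phase-detector/TDC gain, kdco = DCO gain [rad/s per code]."""
    wn = 2 * np.pi * fn_hz
    T = 1.0 / fupd_hz
    # Continuous-time PI filter F(s) = Kp + Ki/s for the open loop kpd*F(s)*kdco/s:
    # wn^2 = kpd*kdco*Ki and 2*zeta*wn = kpd*kdco*Kp.
    ki = wn ** 2 / (kpd * kdco)
    kp = 2 * zeta * wn / (kpd * kdco)
    # Bilinear transform of Ki/s yields an accumulator of gain beta = Ki*T and
    # moves half a sample of it out of the proportional path.
    beta = ki * T
    alpha = kp - beta / 2
    return alpha, beta

print(adpll_gains(fn_hz=100e3, zeta=1.0, kpd=1.0, kdco=2 * np.pi * 10e3, fupd_hz=26e6))
```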
Modeling and Design of Multilevel Bang–Bang CDRs in the Presence of ISI and Noise Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A novel slope-detector-based multilevel bang-bang CDR architecture is proposed and modeled usin...
A 16 Gb/s 3.7 mW/Gb/s 8-Tap DFE Receiver and Baud-Rate CDR With 31 kppm Tracking Bandwidth A 16 Gb/s I/O link receiver fabricated in 22 nm CMOS SOI technology is presented. Attenuation and ISI of transmitted NRZ data across PCB channels are equalized with a CTLE feeding an 8-tap DFE. The first tap uses digital speculation and the following seven taps are realized by means of the switched-capacitor technique. Timing recovery and control are performed with a Mueller-Müller type-A baud-rate CDR. The architecture is half-rate and requires one phase rotator. In total, each slice has six comparators to recover data and timing information. The second-order digital CDR operates at quarter-rate and features a low-latency implementation of the proportional path. At 16 Gb/s, 1 Vppd PRBS31 data transmitted without FFE equalization is recovered across a PCB channel with 34 dB attenuation at 8 GHz. The measured tracking bandwidth is 31 kppm (16 GHz ± 496 MHz), and an amplitude of 3 UIpp is tolerated at 1 MHz sinusoidal jitter. The sinusoidal jitter amplitude tolerance measured at 10 Gb/s is 0.4 UIpp at 10 MHz and remains above 0.2 UIpp up to 1 GHz with PRBS31 data recovered (BER < 10⁻¹²) across a PCB channel with 27 dB attenuation at 5 GHz. The power efficiency is 3.7 mW/Gb/s, including the full-rate clock receiver.
Jitter-Power Trade-Offs in PLLs As new applications impose jitter values in the range of a few tens of femtoseconds, the design of phase-locked loops faces daunting challenges. This paper derives basic relations between the tolerable jitter and the power consumption, predicting severe issues as jitters below 10 fs are sought. The results are also applied to the sampling clocks in analog-to-digital converters and suggest that clock generation may consume a greater power than the converter itself.
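To make the trade-off in this abstract concrete, a one-liner that inverts the widely used jitter figure of merit, FoM = 10·log10((σ_t/1 s)² · (P/1 mW)). This is the standard FoM definition, assumed here; the paper's own derivation differs in detail:

```python
def required_power_mw(jitter_rms_s, fom_db):
    """Estimate the power needed for a target RMS jitter at a given jitter FoM.
    Back-of-envelope only: P[mW] = 10^(FoM/10) / (sigma_t[s])^2."""
    return 10 ** (fom_db / 10) / jitter_rms_s ** 2

# With a state-of-the-art FoM near -250 dB, 10 fs RMS jitter implies:
print(required_power_mw(10e-15, -250.0), "mW")  # -> 1000 mW, i.e. about a watt
```

The result, roughly a watt for sub-10-fs jitter, is exactly the kind of severe issue the abstract predicts.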
Bird's-Eye View of Analog and Mixed-Signal Chips for the 21st Century The Internet of Everything (IoE), clearly a 21st-century technology, brilliantly plays with digital data obtained from analog sources, bringing together two different realities: the analog (physical/real) and the digital (cyber/virtual) worlds. With the boundaries of the IoE still analog in nature, the required functions at the interface involve sensing, measuring, filtering, converting, processing, and connecting, which implies that the analog layer governs the entire system in terms of accuracy and precision. Furthermore, such an interface integrates several analog and mixed-signal subsystems that mainly comprise signal transmission and reception, frequency generation, energy harvesting, and data and power conversion. This paper sets forth a state-of-the-art design perspective on some of the most critical building blocks used in the analog/digital interface, covering wireless cellular transceivers, millimeter-wave frequency generators, energy-harvesting interfaces, and data and power converters, all of which achieve high performance through low power consumption, high energy efficiency, and high speed.
Advancing Data Weighted Averaging Technique for Multi-Bit Sigma–Delta Modulators Multibit sigma-delta modulators which employ the data weighted averaging (DWA) technique are plagued by base-band tone problems. The existing DWA-like techniques for solving these problems are categorized in this brief as tone-suppressing and tone-transferring techniques. Although tone-transferring techniques can achieve a better signal-to-noise-plus-distortion ratio than tone-suppressing techniqu...
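For reference, the baseline DWA element-selection rule the abstract's tone-suppressing and tone-transferring variants build on: step a pointer around the DAC element array by the input code each sample, so mismatch error is first-order shaped. A minimal sketch with invented names:

```python
def dwa_select(code, num_elements, pointer):
    """Data weighted averaging: use the next `code` DAC elements starting at a
    rotating pointer, so every element's usage averages out over time."""
    selected = [(pointer + i) % num_elements for i in range(code)]
    return selected, (pointer + code) % num_elements

ptr = 0
for code in [3, 5, 2, 7, 1]:
    sel, ptr = dwa_select(code, 8, ptr)
    print(code, sel)
# Selection wraps around the 8-element array: [0,1,2], [3..7], [0,1], [2..7,0], [1]
```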
A Second-Order Noise-Shaping SAR ADC With Passive Integrator and Tri-Level Voting This paper presents a low-power and scaling-friendly noise-shaping (NS) SAR ADC. Instead of using operational transconductance amplifiers that are power hungry and scaling unfriendly, the proposed architecture uses passive switches and capacitors to perform residue integration and realizes the path gains via transistor size ratios inside a multi-path dynamic comparator. The overall architecture is simple and robust. Since the noise transfer function is set by component ratios, it is insensitive to process, voltage, and temperature (PVT) variations. Besides the proposed architecture, this paper also presents two new circuit techniques. A tri-level voting scheme is proposed to reduce the comparator noise. It outperforms the majority voting technique by exploiting more information in the comparator output statistics and providing an extra decision level. A dynamic multi-phase clock generator is also proposed to guarantee non-overlapping and support an arbitrary number of phases. A prototype 9-bit NS-SAR ADC is fabricated in a 40-nm CMOS process. It consumes 143 μW at 1.1 V while operating at 8.4 MS/s. Taking advantage of the second-order NS, it achieves a peak SNDR of 78.4 dB over a bandwidth of 262 kHz at the oversampling ratio of 16, leading to an SNDR-based Schreier figure of merit (FoM) of 171 dB.
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called the semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, in which a logical variable is expressed as a vector and a logical function as a multilinear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form, a conventional discrete-time linear system. By analyzing the transition matrix of this linear system, formulas are obtained that give a) the number of fixed points; b) the numbers of cycles of different lengths; c) the transient period for all points to enter the set of attractors; and d) the basin of each attractor. The corresponding algorithms are developed and applied to several examples.
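A numeric toy of the linear representation above. The paper derives the transition matrix L symbolically via the semi-tensor product; here L is built by brute-force enumeration of a 2-node network, which yields the same matrix, and fixed points are counted from its trace:

```python
import numpy as np
from itertools import product

# Toy 2-node Boolean network: x1' = x1 XOR x2, x2' = x1 AND x2.
f = lambda x1, x2: (x1 ^ x2, x1 & x2)

# Enumerate the 2^n states to build the 4x4 0/1 transition matrix L with one 1
# per column, so the dynamics become state(t+1) = L @ state(t) on indicator
# vectors: the same object the semi-tensor product construction produces.
states = list(product([0, 1], repeat=2))
idx = {s: i for i, s in enumerate(states)}
L = np.zeros((4, 4), dtype=int)
for s in states:
    L[idx[f(*s)], idx[s]] = 1

# Number of fixed points = trace(L); traces of powers reveal the cycle structure.
print("fixed points:", np.trace(L))  # -> 2 for this toy network
for k in range(1, 5):
    print(f"states on cycles of length dividing {k}:", np.trace(np.linalg.matrix_power(L, k)))
```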
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-μm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. Voltage-controlled oscillators (VCOs) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCOs, expressed in terms of jitter and phase noise, have a direct impact on the timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators together with a lack of experimental verification of underlying theories has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCOs. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated (1), (2), our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise...
The rainbow skip graph: a fault-tolerant constant-degree distributed data structure We present a distributed data structure, which we call the rainbow skip graph. To our knowledge, this is the first peer-to-peer data structure that simultaneously achieves high fault-tolerance, constant-sized nodes, and fast update and query times for ordered data. It is a non-trivial adaptation of the SkipNet/skip-graph structures of Harvey et al. and Aspnes and Shah, so as to provide fault-tolerance as these structures do, but to do so using constant-sized nodes, as in the family tree structure of Zatloukal and Harvey. It supports successor queries on a set of n items using O(log n) messages with high probability, an improvement over the expected O(log n) messages of the family tree. Our structure achieves these results by using the following new constructs: • Rainbow connections: parallel sets of pointers between related components of nodes, so as to achieve good connectivity between "adjacent" components, using constant-sized nodes. • Hydra components: highly-connected, highly fault-tolerant components of constant-sized nodes, which will contain relatively large connected subcomponents even under the failure of a constant fraction of the nodes in the component. We further augment the hydra components in the rainbow skip graph by using erasure-resilient codes to ensure that any large subcomponent of nodes in a hydra component is sufficient to reconstruct all the data stored in that component. By carefully maintaining the size of related components and hydra components to be O(log n), we are able to achieve fast times for updates and queries in the rainbow skip graph. In addition, we show how to make the communication complexity for updates and queries be worst case, at the expense of more conceptual complexity and a slight degradation in the node congestion of the data structure.
Clocking Analysis, Implementation and Measurement Techniques for High-Speed Data Links—A Tutorial The performance of high-speed wireline data links depends crucially on the quality and precision of their clocking infrastructure. For future applications, such as microprocessor systems that require terabytes/s of aggregate bandwidth, signaling-system designers will have to become even more aware of detailed clock design tradeoffs in order to jointly optimize I/O power, bandwidth, reliability, silicon area and testability. The goal of this tutorial is to assist I/O circuit and system designers in developing an intuitive and practical understanding of I/O clocking tradeoffs at all levels of the link hierarchy, from the circuit-level implementation to system-level architecture.
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√log n)}·n time units. We show that the lower bound of the algorithm is 2^{Ω(√log n)}·n time units.
Robust Biopotential Acquisition via a Distributed Multi-Channel FM-ADC. This contribution presents an active electrode system for biopotential acquisition using a distributed multi-channel FM-modulated analog front-end and ADC architecture. Each electrode captures one biopotential signal and converts to a frequency modulated signal using a VCO tuned to a unique frequency. Each electrode then buffers its output onto a shared analog line that aggregates all of the FM-mo...
Scores (score_0–score_13): 1.11, 0.1, 0.1, 0.1, 0.07, 0.06, 0.01, 0.002, 0, 0, 0, 0, 0, 0
A Charge-Sharing IIR Filter With Linear Interpolation and High Stopband Rejection This article introduces a new discrete-time (DT) charge-sharing (CS) low-pass filter (LPF) that achieves high-order filtering and improves its stopband rejection while maintaining a reasonable duty cycle of the main clock at 20%. It proposes two key innovations: 1) a linear interpolation of the sampling capacitor and 2) a charge re-circulation of the history capacitors for deep stopband rejection. Fabricated in 28-nm CMOS, the proposed IIR LPF demonstrates 1–9.9-MHz bandwidth (BW) programmability and achieves a record-high 120-dB stopband rejection at 100 MHz while consuming merely 0.92 mW. The in-band/out-of-band IIP3 is +17.7 dBm/+26.6 dBm, and the input-referred noise is 3.5 nV/√Hz.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator is reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2–0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Using the complementary nature of node joining and leaving to handle churn problem in P2P networks Churn is a basic and inherent problem in P2P networks. Many relevant studies have been carried out, but all lack generality. In this paper, a general solution is proposed that frees a peer-to-peer (P2P) network from having to deal with churn directly: a logic layer named Dechurn is introduced, in which most churn can be eliminated. To exploit the complementary nature of node joining and leaving, a network scheme named Constellation is designed on the Dechurn layer, whereby the resources cached in a node on behalf of a spouse node that has left the network are handed over to a joining node during its latent period. The simulation results indicate that the proposed solution is effective and efficient in handling churn and easy to put into practice.
Discovery of stable peers in a self-organising peer-to-peer gradient topology Peer-to-peer (P2P) systems are characterised by a wide disparity in peer resources and capabilities. In particular, a number of measurements on deployed P2P systems show that peer stability (e.g. uptime) varies by several orders of magnitude between peers. In this paper, we introduce a peer utility metric and construct a self-organising P2P topology based on this metric that allows the efficient discovery of stable peers in the system. We propose and evaluate a search algorithm and we show that it achieves significantly better performance than random walking. Our approach can be used by certain classes of applications to improve the availability and performance of system services by placing them on the most stable peers, as well as to reduce the amount of network traffic required to discover and use these services. As a proof-of-concept, we demonstrate the design of a naming service on the gradient topology.
An adaptive stabilization framework for distributed hash tables Distributed Hash Tables (DHT) algorithms obtain good lookup performance bounds by using deterministic rules to organize peer nodes into an overlay network. To preserve the invariants of the overlay network, DHTs use stabilization procedures that reorganize the topology graph when participating nodes join or fail. Most DHTs use periodic stabilization, in which peers perform stabilization at fixed intervals of time, disregarding the rate of change in overlay topology; this may lead to poor performance and large stabilization-induced communication overhead. We propose a novel adaptive stabilization framework that takes into consideration the continuous evolution in network conditions. Each peer collects statistical data about the network and dynamically adjusts its stabilization rate based on the analysis of the data. The objective of our scheme is to maintain nominal network performance and to minimize the communication overhead of stabilization.
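The core idea of the row above, adapting the stabilization rate to observed churn, fits in a few lines. A sketch with illustrative thresholds and bounds; the paper's actual statistical analysis is richer than this heuristic:

```python
class AdaptiveStabilizer:
    """Shorten the stabilization interval when observed churn (neighbor
    changes per probe) is high; lengthen it when the overlay is quiet."""
    def __init__(self, interval=30.0, lo=1.0, hi=600.0):
        self.interval, self.lo, self.hi = interval, lo, hi
    def update(self, observed_changes):
        if observed_changes > 2:        # overlay changing fast: probe sooner
            self.interval = max(self.lo, self.interval / 2)
        elif observed_changes == 0:     # overlay stable: back off
            self.interval = min(self.hi, self.interval * 1.5)
        return self.interval

s = AdaptiveStabilizer()
for changes in [0, 0, 3, 5, 0]:
    print(s.update(changes))  # interval stretches, then collapses under churn
```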
D1HT: a distributed one hop hash table Distributed Hash Tables (DHTs) have been used in a variety of applications, but most DHTs so far have opted to solve lookups with multiple hops, sacrificing performance in order to keep routing tables small and minimize maintenance traffic. In this paper, we introduce D1HT, a novel single-hop DHT that is able to maximize performance with reasonable maintenance traffic overhead even for huge and dynamic peer-to-peer (P2P) systems. We formally define the algorithm we propose to detect and notify any membership change in the system, prove its correctness and performance properties, and present a Quarantine-like mechanism to reduce the overhead caused by volatile peers. Our analyses show that D1HT has reasonable maintenance bandwidth requirements even for very large systems, while incurring less than half the bandwidth overhead of previous single-hop DHTs.
SKIP+: A Self-Stabilizing Skip Graph Peer-to-peer systems rely on a scalable overlay network that enables efficient routing between its members. Hypercubic topologies facilitate such operations while each node only needs to connect to a small number of other nodes. In contrast to static communication networks, peer-to-peer networks allow nodes to adapt their neighbor set over time in order to react to join and leave events and failures. This article shows how to maintain such networks in a robust manner. Concretely, we present a distributed and self-stabilizing algorithm that constructs a (slightly extended) skip graph, SKIP+, in polylogarithmic time from any given initial state in which the overlay network is still weakly connected. This is an exponential improvement compared to previously known self-stabilizing algorithms for overlay networks. In addition, our algorithm handles individual joins and leaves locally and efficiently.
Exploiting Node Connection Regularity for DHT Replication Distributed Hash-Tables (DHTs) provide an efficient way to store objects in large-scale peer-to-peer systems. To guarantee that objects are reliably stored, DHTs rely on replication. Several replication strategies have been proposed in the last years. The most efficient ones use predictions about the availability of nodes to reduce the number of object migrations that need to be performed: objects are preferably stored on highly available nodes. This paper proposes an alternative replication strategy. Rather than exploiting highly available nodes, we propose to leverage nodes that exhibit regularity in their connection pattern. Roughly speaking, the strategy consists in replicating each object on a set of nodes that is built in such a way that, with high probability, at any time, there are always at least $k$ nodes in the set that are available. We evaluate this replication strategy using traces of two real-world systems: eDonkey and Skype. The evaluation shows that our regularity-based replication strategy induces a systematically lower network usage than existing state of the art replication strategies.
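A greedy sketch of the regularity-based idea (an illustration under assumed inputs, not the paper's algorithm): given each node's historical availability per time slot, keep adding the most helpful node until every slot is covered by at least k online replicas.

# Hypothetical replica-set construction from per-slot availability bitmaps.
def choose_replicas(availability, k):
    # availability: dict node -> list of 0/1 flags, one per time slot
    slots = len(next(iter(availability.values())))
    cover = [0] * slots                  # replicas online per slot so far
    chosen, remaining = [], dict(availability)
    while remaining and min(cover) < k:
        # Prefer the node that helps most in still under-covered slots.
        best = max(remaining, key=lambda n: sum(
            remaining[n][s] for s in range(slots) if cover[s] < k))
        for s in range(slots):
            cover[s] += remaining[best][s]
        chosen.append(best)
        del remaining[best]
    return chosen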
Flat and hierarchical epidemics in P2P systems: Energy cost models and analysis In large scale distributed systems, epidemic or gossip-based communication mechanisms are preferred for their ease of deployment, simplicity, robustness against failures, load-balancing and limited resource usage. Although they have extensive applicability, there is no prior work on developing energy cost models for epidemic distributed mechanisms. In this study, we address power awareness features of two main groups of epidemics, namely flat and hierarchical. We propose a dominating-set based and power-aware hierarchical epidemic approach that eliminates a significant number of peers from gossiping. To the best of our knowledge, using a dominating set to build a hierarchy for epidemic communication and provide energy efficiency in P2P systems is a novel approach. We develop energy cost model formulations for flat and hierarchical epidemics. In contrast to the prior works, our study is the first one that proposes energy cost models for generic peers using epidemic communication, and examines the effect of protocol parameters to characterize energy consumption. As a case study protocol, we use our epidemic protocol ProFID for frequent items discovery in P2P systems. By means of extensive large scale simulations on PeerSim, we analyze the effect of protocol parameters on energy consumption, compare flat and hierarchical epidemic approaches for efficiency, scalability, and applicability as well as investigate their resilience under realistic churn.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
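The procedure described above can be written in a few lines of NumPy; this is a generic illustration with assumed hyperparameters, using a linear model and squared error for concreteness.

import numpy as np

def sgd(X, y, lr=0.01, batch=32, epochs=10, seed=0):
    # Stochastic gradient descent: average the gradient over a small batch,
    # then step the weights in the opposite direction.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            err = X[b] @ w - y[b]            # per-example errors
            grad = X[b].T @ err / len(b)     # noisy estimate of the full gradient
            w -= lr * grad
    return w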
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
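The half-space rule for the two-class linear classifier described above, stated in code (a textbook illustration, not tied to any system in this text):

import numpy as np

def linear_classify(x, w, threshold=0.0):
    # Weighted sum of feature components; above the threshold -> class 1.
    return 1 if np.dot(w, x) > threshold else 0

# Example: the decision boundary w.x = threshold is a hyperplane.
print(linear_classify(np.array([0.9, 0.2]), np.array([1.0, -1.0])))  # -> 1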
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
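A tiny two-layer example of the chain rule at work, with the ReLU f(z) = max(z, 0) from the text; the shapes and the squared-error loss are illustrative assumptions.

import numpy as np

def forward_backward(x, y, W1, W2):
    # Forward pass through two modules.
    z1 = W1 @ x                       # weighted sums in the hidden layer
    h1 = np.maximum(z1, 0.0)          # ReLU non-linearity
    y_hat = W2 @ h1                   # linear output layer
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: gradients flow from the output toward the input.
    d_yhat = y_hat - y                # dLoss/dy_hat
    dW2 = np.outer(d_yhat, h1)        # gradient for the output weights
    d_h1 = W2.T @ d_yhat              # gradient w.r.t. the module's input
    d_z1 = d_h1 * (z1 > 0)            # ReLU passes gradient only where z1 > 0
    dW1 = np.outer(d_z1, x)           # gradient for the hidden weights
    return loss, dW1, dW2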
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explains half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. 
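The two layer types can be sketched directly (a schematic, loop-based illustration; real ConvNets use optimized kernels and many feature maps):

import numpy as np

def conv2d_valid(image, kernel):
    # One feature map: the same filter bank (shared weights) is applied at
    # every location, followed by a ReLU.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def max_pool2x2(fmap):
    # Coarse-grain positions: keep the maximum of each 2x2 patch, stride 2.
    H, W = fmap.shape
    f = fmap[:H - H % 2, :W - W % 2]
    return f.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))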
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. 
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enables generalization to new combinations of the values of learned features beyond those seen during training (for example, 2^n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components, each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning.
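The first-layer word-vector mechanism described above amounts to a matrix lookup; a minimal sketch with assumed vocabulary size and dimension:

import numpy as np

V, d = 10000, 64                                    # assumed sizes
E = np.random.default_rng(0).normal(size=(V, d))    # first-layer weights

def word_vector(word_id):
    # A one-of-N (one-hot) input times the weight matrix selects one row,
    # i.e., the learned word vector E[word_id].
    one_hot = np.zeros(V)
    one_hot[word_id] = 1.0
    return one_hot @ E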
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. 
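The state-vector recurrence described above, in minimal form (a vanilla RNN sketch with assumed shapes; trained systems add input/output layers and gating):

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    # New state mixes the current input with the previous state.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

def encode(xs, h0, W_xh, W_hh, b):
    # Process the sequence one element at a time; the final state is a
    # fixed-size summary ('thought vector') of the whole input.
    h = h0
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b)
    return h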
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as "where is Frodo now?"89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.
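One step of a simplified LSTM cell, mirroring the accumulator description above (a common textbook formulation; peephole connections and per-gate biases are omitted for brevity):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    # W maps [x; h] to the four gate pre-activations, stacked.
    z = W @ np.concatenate([x, h])
    n = len(c)
    f = sigmoid(z[:n])           # forget gate: learns when to clear the memory
    i = sigmoid(z[n:2 * n])      # input gate: what to accumulate
    o = sigmoid(z[2 * n:3 * n])  # output gate
    g = np.tanh(z[3 * n:])       # candidate external signal
    c = f * c + i * g            # self-connection of weight one + gated input
    h = o * np.tanh(c)
    return h, c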
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Information spreading in stationary Markovian evolving graphs Markovian evolving graphs [2] are dynamic-graph models in which the links among a fixed set of nodes change over time according to an arbitrary Markovian rule. They are extremely general, and they can describe important dynamic-network scenarios well.
Approximately bisimilar symbolic models for nonlinear control systems Control systems are usually modeled by differential equations describing how physical phenomena can be influenced by certain control parameters or inputs. Although these models are very powerful when dealing with physical phenomena, they are less suited to describe software and hardware interfacing with the physical world. For this reason there is a growing interest in describing control systems through symbolic models that are abstract descriptions of the continuous dynamics, where each ''symbol'' corresponds to an ''aggregate'' of states in the continuous model. Since these symbolic models are of the same nature of the models used in computer science to describe software and hardware, they provide a unified language to study problems of control in which software and hardware interact with the physical world. Furthermore, the use of symbolic models enables one to leverage techniques from supervisory control and algorithms from game theory for controller synthesis purposes. In this paper we show that every incrementally globally asymptotically stable nonlinear control system is approximately equivalent (bisimilar) to a symbolic model. The approximation error is a design parameter in the construction of the symbolic model and can be rendered as small as desired. Furthermore, if the state space of the control system is bounded, the obtained symbolic model is finite. For digital control systems, and under the stronger assumption of incremental input-to-state stability, symbolic models can be constructed through a suitable quantization of the inputs.
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and a noise figure of less than 8.0 dB, and a transmitter with an 18.3-dB conversion gain, a 9.5-dBm output 1-dB compression point, a 10.9-dBm saturation output power and 8.8% power added ...
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores (score_0–score_13): 1.11, 0.11, 0.11, 0.11, 0.1, 0.08, 0.02, 0.000714, 0, 0, 0, 0, 0, 0
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand–Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Error Exponents for Asymmetric Two-User Discrete Memoryless Source-Channel Systems Consider transmitting two discrete memoryless correlated sources, consisting of a common and a private source, over a discrete memoryless multi-terminal channel with two transmitters and two receivers. At the transmitter side, the common source is observed by both encoders but the private source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common source, but only one decoder needs to reconstruct the private source. We hence refer to this system as the asymmetric 2-user source-channel system. In this work, we derive a universally achievable joint source-channel coding (JSCC) error exponent pair for the 2-user system by using a technique which generalizes Csiszar's method (1980) for the point-to-point (single-user) discrete memoryless source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish the JSCC theorem with single letter characterization.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (1) highly reliable communication whenever and wherever needed; and (2) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; and 3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
The Impact of Data Aggregation in Wireless Sensor Networks Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Data-centric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-to-end routing schemes. We examine the impact of source-destination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases.
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology.
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
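A sketch of the randomized cluster-head rotation, using the commonly cited LEACH election threshold (details such as residual-energy weighting are omitted; treat this as an illustration rather than the paper's exact protocol):

import random

def elect_cluster_heads(nodes, p, rnd, been_head):
    # p: desired fraction of cluster heads; rnd: current round number;
    # been_head: nodes that already served as head in the current cycle.
    period = int(1 / p)
    if rnd % period == 0:
        been_head.clear()                 # new cycle: everyone eligible again
    threshold = p / (1 - p * (rnd % period))
    heads = [n for n in nodes
             if n not in been_head and random.random() < threshold]
    been_head.update(heads)
    return heads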
Analysis of timing jitter in CMOS ring oscillators In this paper, the effects of thermal noise in transistors on timing jitter in CMOS ring oscillators composed of source-coupled, differential, resistively loaded delay cells are investigated. The relationship between delay element design parameters and the inherent thermal-noise-induced jitter of the generated waveform is analyzed. These results are compared with simulated results from a Monte-Carlo analysis with good agreement. The analysis shows that timing jitter is inversely proportional to the square root of the total capacitance at the output of each inverter, and inversely proportional to the gate-source bias voltage above threshold of the source-coupled devices in the balanced state. Furthermore, these dependencies imply an inverse relationship between jitter and power consumption for an oscillator with fixed output period. Phase noise and timing jitter performance are predicted to improve at a rate of 10 dB per decade increase in power consumption.
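In schematic form, the stated proportionalities can be summarized as (a paraphrase, not the paper's exact expression):

\sigma_{\Delta t} \;\propto\; \frac{1}{\sqrt{C_{\mathrm{tot}}}\,\left(V_{GS}-V_{T}\right)}

where C_tot is the total capacitance at each inverter output and V_GS - V_T is the gate-source bias above threshold of the source-coupled devices in the balanced state.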
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
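For a third-order tensor, the CP decomposition mentioned above takes the form

\mathcal{X} \;\approx\; \sum_{r=1}^{R} \lambda_r \, \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,

a sum of R rank-one tensors, where \circ denotes the vector outer product; the Tucker decomposition instead contracts a (typically small) core tensor with a factor matrix along each mode.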
Networks of spiking neurons: the third generation of neural network models The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch–Pitts neurons (i.e., threshold gates) and sigmoidal gates, respectively. In particular, it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix -- novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors.
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage-operation and power-efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). A tri-level comparator is proposed to relax the speed requirement of the comparator and to decrease the resolution of the internal digital-to-analog converter (DAC) by 1 bit. The internal charge-redistribution DAC employs a unit capacitance of 0.5 fF, and the ADC operates near the thermal-noise limit. To deal with capacitor mismatch, a reconfigurable capacitor array and a calibration procedure were developed. The prototype ADC fabricated using a 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR at 1.1 MS/sec with a 0.5 V power supply. The FoM is 6.3 fJ/conversion-step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conductance modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.026496, 0.023108, 0.004604, 0.00275, 0.000673, 0.000008, 0, 0, 0, 0, 0, 0, 0, 0
8T SRAM Cell as a Multibit Dot-Product Engine for Beyond Von Neumann Computing Large-scale digital computing almost exclusively relies on the von Neumann architecture, which comprises separate units for storage and computations. The energy-expensive transfer of data from the memory units to the computing cores results in the well-known von Neumann bottleneck. Various approaches aimed toward bypassing the von Neumann bottleneck are being extensively explored in the literature. These include in-memory computing based on CMOS and beyond-CMOS technologies, wherein by making modifications to the memory array, vector computations can be carried out as close to the memory units as possible. Interestingly, in-memory techniques based on CMOS technology are of special importance due to the ubiquitous presence of field-effect transistors and the resultant ease of large-scale manufacturing and commercialization. On the other hand, perhaps the most important computation required for applications such as machine learning comprises the dot-product operation. Emerging nonvolatile memristive technologies have been shown to be very efficient in computing analog dot products in an in situ fashion. The memristive analog computation of the dot product results in much faster operation as opposed to digital vector in-memory bitwise Boolean computations. However, challenges with respect to large-scale manufacturing coupled with the limited endurance of memristors have hindered rapid commercialization of memristive-based computing solutions. In this paper, we show that the standard 8 transistor (8T) digital SRAM array can be configured as an analoglike in-memory multibit dot-product engine (DPE). By applying appropriate analog voltages to the read ports of the 8T SRAM array and sensing the output current, an approximate analog-digital DPE can be implemented. We present two different configurations for enabling multibit dot-product computations in the 8T SRAM cell array, without modifying the standard bit-cell structure. We also demonstrate the robustness of the present proposal in the presence of nonidealities such as the effect of line resistances and transistor threshold voltage variations. Since our proposal preserves the standard 8T-SRAM array structure, it can be used as a storage element with standard read-write instructions and also as an on-demand analoglike dot-product accelerator.
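A small NumPy sketch of the idea behind such an analog in-memory dot product may help: stored weights act as conductances, applied read-port voltages as inputs, and the summed bitline current approximates the dot product. All device values and the nonideality magnitudes below are illustrative assumptions, not the paper's transistor-level circuit.

```python
import numpy as np

rng = np.random.default_rng(1)

# One column of an 8T-SRAM-like array: each cell's read port is modeled as a
# conductance proportional to a stored 4-bit weight (assumed behavior).
weights = rng.integers(0, 16, size=64)
G = weights * 1e-6                    # siemens per weight LSB (assumed)

# Input activations encoded as read-port analog voltages (assumed range,
# chosen to keep access transistors in a roughly linear region).
v_in = rng.uniform(0.0, 0.4, size=64)

# Ideal bitline current = dot product of conductances and input voltages.
i_ideal = G @ v_in

# Nonidealities in the spirit of the paper's robustness study: cumulative
# line resistance and threshold-voltage-induced conductance spread.
g_eff = G / (1.0 + G * 50.0)                       # 50-ohm line (made up)
g_eff = g_eff * (1.0 + rng.normal(0.0, 0.02, 64))  # 2% sigma Vt spread (made up)
i_real = g_eff @ v_in

print(f"ideal: {i_ideal*1e6:.3f} uA   with nonidealities: {i_real*1e6:.3f} uA")
```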
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems A neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck of the von Neumann architecture. The recent breakthroughs in memristor devices have made an important step toward realizing a low-power, small-footprint NCS on a chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor-crossbar-based NCS and hinder design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of the wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
Spin-Transfer Torque Memories: Devices, Circuits, and Systems. Spin-transfer torque magnetic memory (STT-MRAM) has gained significant research interest due to its nonvolatility and zero standby leakage, near unlimited endurance, excellent integration density, acceptable read and write performance, and compatibility with CMOS process technology. However, several obstacles need to be overcome for STT-MRAM to become the universal memory technology. This paper fi...
Input-Splitting of Large Neural Networks for Power-Efficient Accelerator with Resistive Crossbar Memory Array Resistive Crossbar memory Arrays (RCA) have been gaining interest as a promising platform to implement Convolutional Neural Networks (CNN). One of the major challenges in RCA-based design is that the number of rows in an RCA is often smaller than the number of input neurons in a layer. Previous works used high-resolution Analog-to-Digital Converters (ADCs) to compute the partial weighted sum in each array and merged partial sums from multiple arrays outside the RCAs. However, such approach suffers from significant power consumption due to the need for high-resolution ADCs. In this paper, we propose a methodology to more efficiently construct a large CNN with multiple RCAs. By splitting the input feature map and retraining the CNN with proper initialization, we demonstrate that any CNN model can be represented with multiple arrays without using intermediate partial sums. The experimental results show that the ADC power of the proposed design is 32x smaller and the total chip power of the proposed design is 3x smaller than those of the baseline design.
RecSSD: near data processing for solid state drive based recommendation inference Neural personalized recommendation models are used across a wide variety of datacenter applications including search, social media, and entertainment. State-of-the-art models comprise large embedding tables that have billions of parameters requiring large memory capacities. Unfortunately, large and fast DRAM-based memories levy high infrastructure costs. Conventional SSD-based storage solutions offer an order of magnitude larger capacity, but have worse read latency and bandwidth, degrading inference performance. RecSSD is a near data processing based SSD memory system customized for neural recommendation inference that reduces end-to-end model inference latency by 2× compared to using COTS SSDs across eight industry-representative models.
Design Tools for Resistive Crossbar based Machine Learning Accelerators Resistive crossbar based accelerators for Machine Learning (ML) have attracted great interest as they offer the prospect of high density on-chip storage as well as efficient in-memory matrix-vector multiplication (MVM) operations. Despite their promises, they present several design challenges, such as high write costs, overhead of analog-to-digital and digital-to-analog converters and other periph...
A Mixed-Signal Binarized Convolutional-Neural-Network Accelerator Integrating Dense Weight Storage and Multiplication for Reduced Data Movement We present a 65nm CMOS mixed-signal accelerator for first and hidden layers of binarized CNNs. Hidden layers support up to 512 3×3×512 binary-input filters, and first layers support up to 64 3×3×3 analog-input filters. Weight storage and multiplication with input activations is achieved within compact hardware, only 1.8× larger than a 6T SRAM bit cell, and output activations are computed via capacitive charge sharing, requiring distribution of only a switch-control signal. Reduced data movement gives energy efficiency of 658 (binary) / 0.95 TOPS/W and throughput of 9438 (binary) / 10.64 GOPS for hidden / first layers.
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
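The layer stack described in the abstract can be written down compactly; the following is a schematic PyTorch sketch, assuming the published AlexNet kernel sizes, with local response normalization and the original two-GPU channel grouping omitted for brevity.

```python
import torch.nn as nn

# Five conv layers (some followed by max-pooling), three fully connected
# layers, ReLU nonlinearities, dropout in the FC layers, and a final
# 1000-way classifier, as described above. Expects 3x227x227 input.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # softmax is folded into the training loss
)
```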
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
On The Advantages of Tagged Architecture This paper proposes that all data elements in a computer memory be made to be self-identifying by means of a tag. The paper shows that the advantages of the change from the traditional von Neumann machine to tagged architecture are seen in all software areas including programming systems, operating systems, debugging systems, and systems of software instrumentation. It discusses the advantages that accrue to the hardware designer in the implementation and gives examples for large- and small-scale systems. The economic costs of such an implementation for a minicomputer system are examined. The paper concludes that such a machine architecture may well be a suitable replacement for the traditional von Neumann architecture.
A dynamic analysis of the Dickson charge pump circuit Dynamics of the Dickson charge pump circuit are analyzed. The analytical results enable the estimation of the rise time of the output voltage and that of the power consumption during boosting. By using this analysis, the optimum number of stages to minimize the rise time has been estimated as $1.4N_{\min}$, where $N_{\min}$ is the minimum value of the number of stages necessary for a given parame...
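A short worked example of the stage-count rule quoted above, using the simplified zero-load steady-state model v_out = v_dd + N(v_clk − v_t) as an assumption (the paper's dynamic model is more detailed):

```python
import math

def dickson_stages(v_dd, v_target, v_clk=None, v_t=0.0):
    """Stage-count estimates for a Dickson charge pump.

    N_min is the smallest N reaching v_target under the simplified model;
    per the paper's analysis, rise time is minimized at roughly 1.4 * N_min.
    """
    v_clk = v_dd if v_clk is None else v_clk
    n_min = math.ceil((v_target - v_dd) / (v_clk - v_t))
    return n_min, round(1.4 * n_min)

# Example: pumping 1.8 V to 9 V with ideal switches (v_t = 0)
# gives N_min = 4, but the fastest rise occurs at about 6 stages.
print(dickson_stages(1.8, 9.0))   # (4, 6)
```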
Efficient dithering in MASH sigma-delta modulators for fractional frequency synthesizers The digital multistage-noise-shaping (MASH) ΣΔ modulators used in fractional frequency synthesizers are prone to spur tone generation in their output spectrum. In this paper, the state of the art on spur-tone-magnitude reduction is used to demonstrate that an M-bit MASH architecture dithered by a simple M-bit linear feedback shift register (LFSR) can be as effective as more sophisticated topologies if the dither signal is properly added. A comparison between the existing digital ΣΔ modulators used in fractional synthesizers is presented to demonstrate that the MASH architecture has the best tradeoff between complexity and quantization noise shaping, but that it exhibits spur tones. The objective of this paper is to significantly decrease the area of the circuit used to reduce the spur tone magnitude for these MASH topologies. The analysis is validated with a theoretical study of the paths where the dither signal can be added. Experimental results of a digital M-bit MASH 1-1-1 ΣΔ modulator with the proposed way of adding the LFSR dither are presented to make a hardware comparison.
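Since the argument hinges on where the dither enters the modulator, a behavioral Python sketch may help. The LFSR taps, the seed, and the input-LSB injection point are illustrative assumptions, and the combiner is the textbook MASH 1-1-1 error-cancellation form rather than the paper's exact circuit.

```python
import numpy as np

def lfsr_bits(n, taps=(16, 14, 13, 11), seed=0xACE1):
    """Fibonacci LFSR bit stream (16-bit maximal-length taps shown)."""
    state, out = seed, []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        out.append(fb)
    return np.array(out, dtype=int)

def mash111(frac, m_bits, dither):
    """M-bit MASH 1-1-1 with an LSB dither stream added at the input."""
    mod, n = 1 << m_bits, len(dither)
    a1 = a2 = a3 = 0
    c2_d = c3_d = c3_dd = 0          # delayed carries for the combiner
    y = np.empty(n, dtype=int)
    for i in range(n):
        c1, a1 = divmod(a1 + frac + int(dither[i]), mod)
        c2, a2 = divmod(a2 + a1, mod)
        c3, a3 = divmod(a3 + a2, mod)
        # y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3  (standard error cancellation)
        y[i] = c1 + (c2 - c2_d) + (c3 - 2 * c3_d + c3_dd)
        c2_d, c3_dd, c3_d = c2, c3_d, c3
    return y

y = mash111(frac=0x3521, m_bits=16, dither=lfsr_bits(1 << 14))
print(y.mean() * (1 << 16))   # close to 0x3521 (dither shifts it by ~0.5 LSB)
```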
Wireless sensing and vibration control with increased redundancy and robustness design. Control systems with long distance sensor and actuator wiring have the problem of high system cost and increased sensor noise. Wireless sensor network (WSN)-based control systems are an alternative solution involving lower setup and maintenance costs and reduced sensor noise. However, WSN-based control systems also encounter problems such as possible data loss, irregular sampling periods (due to the uncertainty of the wireless channel), and the possibility of sensor breakdown (due to the increased complexity of the overall control system). In this paper, a wireless microcontroller-based control system is designed and implemented to wirelessly perform vibration control. The wireless microcontroller-based system is quite different from regular control systems due to its limited speed and computational power. Hardware, software, and control algorithm design are described in detail to demonstrate this prototype. Model and system state compensation is used in the wireless control system to solve the problems of data loss and sensor breakdown. A positive position feedback controller is used as the control law for the task of active vibration suppression. Both wired and wireless controllers are implemented. The results show that the WSN-based control system can be successfully used to suppress the vibration and produces resilient results in the presence of sensor failure.
Robust Biopotential Acquisition via a Distributed Multi-Channel FM-ADC. This contribution presents an active electrode system for biopotential acquisition using a distributed multi-channel FM-modulated analog front-end and ADC architecture. Each electrode captures one biopotential signal and converts to a frequency modulated signal using a VCO tuned to a unique frequency. Each electrode then buffers its output onto a shared analog line that aggregates all of the FM-mo...
score_0–score_13: 1.0525, 0.05, 0.05, 0.05, 0.05, 0.05, 0.0175, 0.001351, 0, 0, 0, 0, 0, 0
Efficient FPGA Implementations of Pair and Triplet-Based STDP for Neuromorphic Architectures Synaptic plasticity is envisioned to bring about learning and memory in the brain. Various plasticity rules have been proposed, among which spike-timing-dependent plasticity (STDP) has gained the highest interest across various neural disciplines, including neuromorphic engineering. Here, we propose highly efficient digital implementations of pair-based STDP (PSTDP) and triplet-based STDP (TSTDP) on field-programmable gate arrays that do not require dedicated floating-point multipliers and hence need minimal hardware resources. The implementations are verified by using them to replicate a set of complex experimental data, including those from pair, triplet, quadruplet, frequency-dependent pairing, as well as Bienenstock-Cooper-Munro experiments. We demonstrate that the proposed TSTDP design has a higher operating frequency that leads to $2.46\times$ faster weight adaptation (learning) and achieves an 11.55-fold improvement in resource usage, compared to a recent implementation of a calcium-based plasticity rule capable of exhibiting similar learning performance. In addition, we show that the proposed PSTDP and TSTDP designs, respectively, consume $2.38\times$ and $1.78\times$ less resources than the most efficient PSTDP implementation in the literature. As a direct result of the efficiency and powerful synaptic capabilities of the proposed learning modules, they could be integrated into large-scale digital neuromorphic architectures to enable high-performance STDP learning.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses $\times$ 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
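The recognition step (nearest approximated cone, with each cone replaced by a low-dimensional linear subspace) reduces to a projection-residual comparison; here is a minimal NumPy sketch under that assumption, with the subspace dimension chosen illustratively.

```python
import numpy as np

def illumination_basis(images, dim=9):
    """Orthonormal basis of one subject's illumination subspace.

    `images` is (n_images x D): vectorized renderings of one face under
    sampled lighting, as produced by the paper's generative model. The
    low dimension (9 here) is an assumption in the spirit of the paper.
    """
    _, _, vt = np.linalg.svd(images, full_matrices=False)
    return vt[:dim].T                         # D x dim

def classify(test_img, subspaces):
    """Assign the identity with the smallest Euclidean residual to its subspace."""
    x = test_img.ravel()
    resid = {name: np.linalg.norm(x - B @ (B.T @ x))
             for name, B in subspaces.items()}
    return min(resid, key=resid.get)
```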
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights, and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost, and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120 MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2 GHz general-purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture, we demonstrate the energy benefit resulting from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, it is the first time that approximate computing and parallel processing are applied to FPGA-based spiking neural networks. The influence of parallel processing on the benefits of approximate computing is also discussed in detail.
Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons. Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware-efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce the detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation are presented, which remove the hardware cost of memory and multiplier resources and enhance the computational speed by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance the NoC performance, including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work can provide an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that supports the scalability of the approach.
Efficient Design of Spiking Neural Network With STDP Learning Based on Fast CORDIC In emerging Spiking Neural Network (SNN) based neuromorphic hardware design, energy efficiency and on-line learning are attractive advantages mainly contributed by bio-inspired local learning with nonlinear dynamics and at the cost of associated hardware complexity. This paper presents a novel SNN design employing fast COordinate Rotation DIgital Computer (CORDIC) algorithm to achieve fast spike t...
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using a deep compression technique in the spiking neural network chip, the amount of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their $O(MN^3)$ computational complexity (where $M$ is the number of objectives and $N$ is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with $O(MN^2)$ computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best $N$ solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
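The fast non-dominated sorting step mentioned above is compact enough to sketch directly; the following Python version is a plain transcription of the standard NSGA-II procedure (minimization convention), not the authors' reference code.

```python
def fast_non_dominated_sort(objs):
    """NSGA-II's O(M N^2) non-dominated sorting for minimization problems.

    `objs` is a sequence of equal-length objective tuples. Returns fronts as
    lists of indices; front 0 is the non-dominated (Pareto) set.
    """
    n = len(objs)
    dominates = [[] for _ in range(n)]   # S_p: indices that p dominates
    count = [0] * n                      # n_p: how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if objs[p] == objs[q]:
                continue
            if all(a <= b for a, b in zip(objs[p], objs[q])):
                dominates[p].append(q)   # p no worse everywhere, better somewhere
            elif all(b <= a for a, b in zip(objs[p], objs[q])):
                count[p] += 1
        if count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominates[p]:
                count[q] -= 1
                if count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]

print(fast_non_dominated_sort([(1, 2), (2, 1), (3, 3), (0, 4)]))
# [[0, 1, 3], [2]] : (3,3) is dominated by both (1,2) and (2,1)
```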
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
score_0–score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Memoryless Approach to the LQ and LQG Problems with Variable Input Delay This note studies the LQ and LQG problems for linear time invariant systems with a single time-varying input delay and instantaneous (memoryless) state feedback. We extend the memoryless state feedback solution proposed in [1] in two directions. We prove that in the deterministic case a memoryless state feedback can be in general optimal only up to a certain delay, for which we provide a sufficient, and sometimes strict, bound. Moreover, we show that this memoryless control is optimal also in the case of time-varying delays and that the quadratic cost functional has the same value as in the case without delay. For time varying delays the control law requires that the relationship between time points in which the input is generated and applied is known and invertible even if the delay function needs not to be differentiable or even continuous. Finally, we prove that the cost functional is bounded also in the stochastic case for the same delay interval as in the deterministic case, but with a larger cost than the delay-less LQG solution.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than information previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing Halanay's inequality to time-varying differential inequalities.
Robustness of Adaptive Control under Time Delays for Three-Dimensional Curve Tracking. We analyze the robustness of a class of controllers that enable three-dimensional curve tracking by a free moving particle. The free particle tracks the closest point on the curve. By building a strict Lyapunov function and robustly forward invariant sets, we show input-to-state stability under predictable tolerance and safety bounds that guarantee robustness under control uncertainty, input delays, and a class of polygonal state constraints, including adaptive tracking and parameter identification under unknown control gains. Such an understanding may provide certified performance when the control laws are applied to real-life systems.
Optimal control of linear systems with large and variable input delays This paper proposes an optimal control law for linear systems affected by input delays. Specifically we prove that when the delay functions are known it is possible to generate the optimal control for arbitrarily large delay values by using a DDE without distributed terms. The solution can be seen as a chain of predictors whose size depends on the maximum delay.
Remote Stabilization Via Communication Networks With a Distributed Control Law In this paper we investigate the problem of remote stabilization via communication networks involving some time-varying delays of known average dynamics. This problem arises when the control law is remotely implemented and leads to the problem of stabilizing an open-loop unstable system with time-varying delay. We use a time-varying horizon predictor to design a stabilizing control law that sets the poles of the closed-loop system. The computation of the horizon of the predictor is investigated and the proposed control law explicitly takes into account an estimation of the average delay dynamics. The resulting closed-loop system robustness with respect to some uncertainties on the delay estimation is also considered. Simulation results are finally presented.
Reduction Model Approach for Linear Time-Varying Systems With Delays. We study stabilization problems for time-varying linear systems with constant input delays. Our reduction method ensures input-to-state stability with respect to additive uncertainties, under arbitrarily long delays. It applies to rapidly time-varying systems, and gives a lower bound on the admissible rapidness parameters. We also cover slowly time-varying systems, including upper bounds on the allowable slowness parameters. We illustrate our work using a pendulum model.
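For orientation, the constant-delay LTI case on which this reduction approach builds can be stated compactly; the following Artstein-type transformation is included as a standard illustrative reference, not as the paper's time-varying construction.

```latex
% Reduction transformation for \dot{x}(t) = A x(t) + B u(t - D), constant D:
z(t) = x(t) + \int_{t-D}^{t} e^{A(t-D-\theta)}\, B\, u(\theta)\, \mathrm{d}\theta
\quad\Longrightarrow\quad
\dot{z}(t) = A z(t) + e^{-AD} B u(t),
% so any feedback u = K z stabilizing the delay-free pair (A, e^{-AD}B)
% stabilizes the original delayed system.
```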
Disturbance Rejection for Input-Delay System Using Observer-Predictor-Based Output Feedback Control This article addresses the control problem of an open-loop unstable linear time-invariant system with an input delay and an unknown disturbance. An observer-predictor-based control method is presented for such a system in which only output information is available. First, the system state is reconstructed and the disturbance is estimated using the equivalent-input-disturbance approach. Next, the future information on the state and the disturbance is predicted to reduce the effect of the input delay. Then, a new predictive control scheme is developed. The closed-loop system is simplified into two subsystems for analysis. Stability conditions are derived for each subsystem separately. Finally, a comparison with previous approaches through simulations shows the superiority of the presented method over others for disturbance rejection.
Construction of interval observers for continuous-time systems with discrete measurements. We consider continuous-time systems with input, output and additive disturbances in the particular case where the measurements are only available at discrete instants and have disturbances. To solve a state estimation problem, we construct continuous–discrete interval observers that are asymptotically stable in the absence of disturbances. These interval observers are composed of two copies of the studied system and of a framer, accompanied with appropriate outputs which give, componentwise, upper and lower bounds for the solutions of the studied system.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their $O(MN^3)$ computational complexity (where $M$ is the number of objectives and $N$ is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with $O(MN^2)$ computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best $N$ solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
Adaptive clustering for mobile wireless networks This paper describes a self-organizing, multihop, mobile radio network which relies on a code-division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled, and are dynamically reconfigured as the nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure, and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network
An Algorithm to Improve the Performance of M-Channel Time-Interleaved A-D Converters One method for achieving high-speed waveform digitizing uses time-interleaved A-D Converters (ADCs). It is known that, in this method, using multiple ADCs enables sampling at a rate higher than the sampling rate of the ADC being used. Degradation of the dynamic range, however, results from such factors as phase error in the sampling clock applied to the ADC, and mismatched frequency characteristics among the individual ADCs. This paper describes a method for correcting these mismatches using a digital signal processing (DSP) technique. This method can be applied to any number of interleaved ADCs, and it does not require any additional hardware; good correction and improved accuracy can be obtained simply by adding a little to the computing overhead.
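A behavioral NumPy sketch of the problem this addresses: per-channel mismatches corrupt a time-interleaved capture, and the static gain/offset part can be removed digitally. The mismatch magnitudes below are invented, and the paper's actual DSP correction (which also handles timing and frequency-response mismatch) is only gestured at in the final comment.

```python
import numpy as np

M, N, fs = 4, 4096, 1.0
f_in = 0.1234
t = np.arange(N) / fs

# Illustrative per-channel errors: gain, offset, and clock skew
# (skew expressed as a fraction of one sample period).
gains = 1.0 + np.array([0.00, 0.01, -0.02, 0.015])
offsets = np.array([0.000, 0.002, -0.001, 0.003])
skew = np.array([0.0, 0.01, -0.02, 0.015]) / fs

# Interleaved capture: channel m samples every M-th point.
x = np.empty(N)
for m in range(M):
    x[m::M] = gains[m] * np.sin(2 * np.pi * f_in * (t[m::M] + skew[m])) + offsets[m]

# Digital correction of the static part: per-channel offset removal and gain
# normalization (timing skew would need fractional-delay filtering, as in
# the paper's frequency-characteristic correction).
for m in range(M):
    x[m::M] -= x[m::M].mean()
    x[m::M] *= x[::M].std() / x[m::M].std()

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(N))) + 1e-12)
# Residual skew leaves interleave spurs at k*fs/M +/- f_in in `spec`.
```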
Feature selection for medical diagnosis: Evaluation for cardiovascular diseases Machine learning has emerged as an effective medical diagnostic support system. In a medical diagnosis problem, a set of features that are representative of all the variations of the disease is necessary. The objective of our work is to predict more accurately the presence of cardiovascular disease with a reduced number of attributes. We investigate an intelligent system to generate a feature subset with improvement in diagnostic performance. Features ranked with a distance measure are searched through forward inclusion, forward selection, and backward elimination search techniques to find the subset that gives improved classification results. We propose a hybrid forward selection technique for cardiovascular disease diagnosis. Our experiments demonstrate that this approach finds smaller subsets and increases the accuracy of diagnosis compared to forward inclusion and backward elimination techniques.
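To make the search strategy concrete, here is a plain greedy forward-selection sketch in Python using scikit-learn's cross-validation; the paper's distance-measure ranking and hybrid variant are not reproduced, and the logistic-regression model is an arbitrary stand-in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, names, model=None):
    """Greedy forward selection: repeatedly add the feature that most improves
    5-fold CV accuracy; stop when no remaining feature helps."""
    model = model or LogisticRegression(max_iter=1000)
    selected, best_acc = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        acc, j = max((cross_val_score(model, X[:, selected + [jj]], y, cv=5).mean(), jj)
                     for jj in remaining)
        if acc <= best_acc:
            break                      # no candidate improves CV accuracy
        best_acc = acc
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected], best_acc
```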
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to the feedback information that comes from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-$R_{ESR}$ design configuration. The prototype fabricated using a TSMC 0.25 μm CMOS process occupies an area of 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA with a maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.044679, 0.042551, 0.0412, 0.035576, 0.02213, 0.018336, 0.0048, 0.000222, 0, 0, 0, 0, 0, 0
A 0.013 mm², 5 μW, DC-Coupled Neural Signal Acquisition IC With 0.5 V Supply Recent success in brain-machine interfaces has provided hope for patients with spinal-cord injuries, Parkinson's disease, and other debilitating neurological conditions, and has boosted interest in electronic recording of cortical signals. State-of-the-art recording solutions rely heavily on analog techniques at relatively high supply voltages to perform signal conditioning and filtering, leading to large silicon area and limited programmability. We present a neural interface in 65nm CMOS and operating at a 0.5V supply that obtains performance comparable or superior to state-of-the-art systems in a silicon area over 3x smaller. These results are achieved by using a scalable architecture that avoids on-chip passives and takes advantage of high-density logic. The use of 65nm CMOS eases integration with low-power digital systems, while the low supply voltage makes the design more compatible with wireless powering schemes.
The 128-channel fully differential digital integrated neural recording and stimulation interface. We present a fully differential 128-channel integrated neural interface. It consists of an array of 8 × 16 low-power low-noise signal-recording and generation circuits for electrical neural activity monitoring and stimulation, respectively. The recording channel has two stages of signal amplification and conditioning and a fully differential 8-b column-parallel successive approximation (SAR) analog-to-digital converter (ADC). The total measured power consumption of each recording channel, including the SAR ADC, is 15.5 μW. The measured input-referred noise is 6.08 μVrms over a 5-kHz bandwidth, resulting in a noise efficiency factor of 5.6. The stimulation channel performs monophasic or biphasic voltage-mode stimulation, with a maximum stimulation current of 5 mA and a quiescent power dissipation of 51.5 μW. The design is implemented in 0.35-μm complementary metal-oxide semiconductor technology with a channel pitch of 200 μm for a total die size of 3.4 mm × 2.5 mm and a total power consumption of 9.33 mW. The neural interface was validated in in vitro recording of a low-Mg(2+)/high-K(+) epileptic seizure model in an intact hippocampus of a mouse.
Wireless Multichannel Neural Recording With a 128-Mbps UWB Transmitter for an Implantable Brain-Machine Interfaces. Simultaneous recordings of neural activity at large scale, in the long term and under bio-safety conditions, can provide essential data. These data can be used to advance the technology for brain-machine interfaces in clinical applications, and to understand brain function. For this purpose, we present a new multichannel neural recording system that can record up to 4096-channel (ch) electrocortic...
A 200 μW Eight-Channel EEG Acquisition ASIC for Ambulatory EEG Systems The growing interest toward the improvement of patients' quality of life and the use of medical signals in nonmedical applications such as entertainment, sports, and brain-computer interfaces, requires the implementation of miniaturized and wireless biopotential acquisition systems with ultralow power dissipation. Therefore, this paper presents the implementation of a complete EEG acquisition ASIC ...
A Low-Noise Area-Efficient Chopped VCO-Based CTDSM for Sensor Applications in 40-nm CMOS. An area-efficient voltage-sensing readout circuit employing chopped voltage-controlled oscillator (VCO)-based continuous-time delta-sigma modulator (CTDSM) is presented in this paper. This VCO-based CTDSM features direct connection to sensors to eliminate pre-amplifier for achieving better hardware efficiency. The VCO is designed as a trans-conductor current-controlled oscillator, which is a fully...
A 15.2-ENOB 5-kHz BW 4.5-µW Chopped CT ΔΣ-ADC for Artifact-Tolerant Neural Recording Front Ends. Implantable closed-loop neural stimulation is desirable for clinical translation and basic neuroscience research. Neural stimulation generates large artifacts at the recording sites, which saturate existing recording front ends. This paper presents a low-power continuous-time delta-sigma analog to digital converter (ADC), which along with an 8x gain capacitively-coupled chopper instrumentation amp...
A 6.5-μW 10-kHz BW 80.4-dB SNDR Gm-C-Based CT ΔΣ Modulator With a Feedback-Assisted Gm Linearization for Artifact-Tolerant Neural Recording This article presents a Gm-C-based continuous-time delta-sigma modulator (CTDSM) for artifact-tolerant neural recording interfaces. We propose the feedback-assisted Gm linearization technique, which is applied to the first Gm-C integrator by using a resistive feedback digital-to-analog converter (DAC) in parallel to the degeneration resistor of the input Gm. This enables the input Gm to process the quantization noise, thereby improving the input range and linearity of the Gm-C-based CTDSM significantly. An energy-efficient second-order loop filter is realized by using a voltage-controlled oscillator (VCO) as the second integrator and a phase quantizer. A proportional-integral (PI) transfer function is employed at the first integrator, which minimizes the output swing while maintaining loop stability. Fabricated in a 110-nm CMOS process, the prototype CTDSM achieves a high input impedance, 300-mVpp linear input range, 80.4-dB signal-to-noise and distortion ratio (SNDR), 81-dB dynamic range (DR), and 76-dB common-mode rejection ratio (CMRR) and consumes only 6.5 μW with a signal bandwidth of 10 kHz. This corresponds to a figure of merit (FoM) of 172.3 dB, which is the state of the art among the neural recording ADCs. This work is also validated through the in vivo experiment.
A High Area-Efficiency 14-bit SAR ADC With Hybrid Capacitor DAC for Array Sensors This paper proposes a high area-efficiency 14-bit column-parallel successive approximation register (SAR) analog-to-digital converter (ADC) for array sensors. A novel hybrid capacitor digital-to-analog converter (CDAC) based on charge transfer is utilized to increase the area efficiency. It consists of a 9-bit split CDAC and a 5-bit serial CDAC. A foreground digital calibration is employed to compensate for the linearity error caused by the capacitor mismatch and the bridge parasitic capacitor. The prototype was designed and fabricated in a 130-nm CMOS technology. Sampling at 200 kS/s, the total power consumption is 57 μW. With the digital calibration, the proposed ADC achieves a spurious-free dynamic range (SFDR) of 89.14 dB and a differential nonlinearity (DNL) of 0.87/-0.99 LSB. The single ADC occupies an active area of 15 × 1450 μm² and the area efficiency is only 6.77 μm²/code.
Blind Calibration of Timing Offsets for Four-Channel Time-Interleaved ADCs In this paper, we describe a blind calibration method for timing mismatches in a four-channel time-interleaved analog-to-digital converter (ADC). The proposed method requires that the input signal should be slightly oversampled. This ensures that there exists a frequency band around the zero frequency where the Fourier transforms of the four ADC subchannels contain only three alias components, ins...
Spatial And Temporal Communication Theory Using Adaptive Antenna Array An adaptive antenna array, or smart antenna, is called a software antenna because it can form a desired antenna pattern and adaptively control it if an appropriate set of antenna weights is provided and updated in software. It can be a typical tool for realizing a software radio. An adaptive antenna array can be considered an adaptive filter in space and time domains for radio communications, so communication theory can be generalized from the conventional time domain into both space and time domains. This article introduces a spatial and temporal communication theory based on an adaptive antenna array, covering spatial and temporal channel modeling, equalization, optimum detection for single-user and multi-user CDMA, precoding in the transmitter, and joint optimization of both transmitter and receiver. Such spatial and temporal processing promises significant improvement of performance against multipath fading in mobile radio communications.
On global identifiability for arbitrary model parametrizations It is a fundamental problem of identification to be able—even before the data have been analyzed—to decide if all the free parameters of a model structure can be uniquely recovered from data. This is the issue of global identifiability. In this contribution we show how global identifiability for an arbitrary model structure (basically with analytic non-linearities) can be analyzed using concepts and algorithms from differential algebra. It is shown how the question of global structural identifiability is reduced to the question of whether the given model structure can be rearranged as a linear regression. An explicit algorithm to test this is also given. Furthermore, the question of 'persistent excitation' for the input can also be tested explicitly in a similar fashion. The algorithms involved are very well suited for implementation in computer algebra. One such implementation is also described.
A 5.4-Gbit/s Adaptive Continuous-Time Linear Equalizer Using Asynchronous Undersampling Histograms We demonstrate a new type of adaptive continuous-time linear equalizer (CTLE) based on asynchronous undersampling histograms. Our CTLE automatically selects the optimal equalizing filter coefficient among several predetermined values by searching for the coefficient that produces the largest peak value in histograms obtained with asynchronous undersampling. This scheme is simple and robust and does not require clock synchronization for its operation. A prototype chip realized in 0.13-μm CMOS technology successfully achieves equalization for 5.4-Gbit/s $2^{31}-1$ pseudorandom bit sequence data through 40-, 80-, and 120-cm PCB traces and a 3-m DisplayPort cable. In addition, we present the results of statistical analysis with which we verify the reliability of our scheme for various sample sizes. The results of this analysis are confirmed with experimental data.
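The selection rule described above (largest histogram peak wins) is easy to state in code. Below is a minimal Python sketch of that adaptation loop; `equalize` is a hypothetical callable standing in for the analog channel-plus-CTLE path, and the sample and bin counts are illustrative.

```python
import numpy as np

def select_ctle_coeff(equalize, candidates, n_samples=20000, n_bins=64, seed=7):
    """Pick the CTLE coefficient whose asynchronously undersampled amplitude
    histogram has the tallest peak: a well-equalized NRZ waveform spends most
    of its time near the two signal levels, so its histogram peaks sharpen."""
    rng = np.random.default_rng(seed)
    best_peak, best_c = -1, None
    for c in candidates:
        y = equalize(c)                            # equalized output for setting c
        idx = rng.integers(0, len(y), n_samples)   # asynchronous undersampling
        hist, _ = np.histogram(y[idx], bins=n_bins)
        if hist.max() > best_peak:
            best_peak, best_c = hist.max(), c
    return best_c
```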
A 40 V 10 W 93%-Efficiency Current-Accuracy-Enhanced Dimmable LED Driver With Adaptive Timing Difference Compensation for Solid-State Lighting Applications This paper presents a floating-buck dimmable LED driver for solid-state lighting applications. In the proposed driver, an adaptive timing difference compensation (ATDC) is developed to adaptively adjust the off-time of the low-side power switch to enable the driver to achieve high accuracy of the average LED current over a wide range of input voltages and number of output LED loads, fast settling time, and high operation frequency. The power efficiency benefits from the capabilities of using synchronous rectifier and having no sensing resistor in the power stage. The synchronous rectification under high input supply voltage is enabled by a proposed high-speed and low-power gate driver with pseudo-digital level shifters. Implemented in a 0.35 μm 50 V CMOS process, experimental results show that the proposed LED driver can operate at 1 MHz and achieve peak power efficiency of 93% to support a wide range of series-connected output LEDs from 1 to 10 and a wide input range from 10 to 40 V. The proposed LED driver has only 2.8% current error from the average LED current of 345 mA and settles within 8.5 μs after triggering the dimming condition, improving the settling time by 14 times compared with the state-of-the-art LED drivers.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.042169
0.04
0.04
0.04
0.04
0.023263
0.016
0.001333
0.000001
0
0
0
0
0
Design Optimization for Integrated Neural Recording Systems Power and chip area are the most important parameters in designing a neural recording system in vivo. This paper reports a design methodology for an optimized integrated neural recording system. Electrode noise is considered in determining the ADC's resolution to prevent over-design of the ADC, which leads to unnecessary power consumption and chip area. The optimal transconductance and gain of the pre-amplifiers, which minimize the power-area product of the amplifier, are mathematically derived. A numerical example using actual circuit parameters is shown to demonstrate the design methodology. A tradeoff between the power consumption of the system and the chip area in terms of the multiplexing ratio is investigated and the optimal number of channels per ADC is selected to achieve the minimum power-area product for the entire system. Following the proposed design methodology, a chip has been designed in a 0.35 μm CMOS process, with a multiplexing ratio of 16:1, resulting in a total chip area of 2.5 mm × 2.0 mm and power consumption of 5.3 mW from ±1.65 V.
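One common way to phrase the "no over-design" rule for the ADC (our illustration of the idea; the paper's derivation also folds in pre-amplifier gain and the power-area product) is to pick the step size so quantization noise stays below the electrode noise referred to the ADC input:

```latex
% choose resolution B so quantization noise is dominated by electrode noise
\sigma_q \;=\; \frac{\Delta}{\sqrt{12}} \;=\; \frac{V_{\mathrm{FS}}}{2^{B}\sqrt{12}}
\;\lesssim\; G\,\sigma_{\mathrm{elec}}
\qquad\Longrightarrow\qquad
B \;\gtrsim\; \log_2 \frac{V_{\mathrm{FS}}}{\sqrt{12}\;G\,\sigma_{\mathrm{elec}}}
```

Here V_FS is the ADC full scale, G the front-end gain, and σ_elec the rms electrode noise; any bits beyond this bound only digitize noise and cost power and area.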
A 200 μW Eight-Channel EEG Acquisition ASIC for Ambulatory EEG Systems The growing interest toward the improvement of patients' quality of life and the use of medical signals in nonmedical applications such as entertainment, sports, and brain-computer interfaces requires the implementation of miniaturized and wireless biopotential acquisition systems with ultralow power dissipation. Therefore, this paper presents the implementation of a complete EEG acquisition ASIC ...
A 13 μA analog signal processing IC for accurate recognition of multiple intra-cardiac signals. A low-power analog signal processing IC is presented for low-power heart rhythm analysis. The ASIC features 3 identical but independent intra-ECG readout channels, each equipped with an analog QRS feature extractor for low power consumption and fast diagnosis of the fatal case. A 16-level digitized sine-wave synthesizer together with a synchronous readout circuit can measure bio-impedance in the r...
A 345 µW Multi-Sensor Biomedical SoC With Bio-Impedance, 3-Channel ECG, Motion Artifact Reduction, and Integrated DSP This paper presents a MUlti-SEnsor biomedical IC (MUSEIC). It features a high-performance, low-power analog front-end (AFE) and a fully integrated DSP. The AFE has three biopotential readouts, one bio-impedance readout, and support for general-purpose analog sensors. The biopotential readout channels can handle large differential electrode offsets (±400 mV), achieve high input impedance (>500 MΩ), low noise (620 nVrms in a 150 Hz bandwidth), and large CMRR (>110 dB) without relying on trimming while consuming only 31 μW/channel. In addition, fully integrated real-time motion artifact reduction, based on simultaneous electrode-tissue impedance measurement, with feedback to the analog domain is supported. The bio-impedance readout with pseudo-sine current generator achieves a resolution of 9.8 mΩ/√Hz while consuming just 58 μW/channel. The DSP has a general purpose ARM Cortex M0 processor and an HW accelerator optimized for energy-efficient execution of various biomedical signal processing algorithms, achieving 10× or more energy savings in vector multiply-accumulate executions.
A 16-Channel Patient-Specific Seizure Onset and Termination Detection SoC With Impedance-Adaptive Transcranial Electrical Stimulator A 16-channel noninvasive closed-loop beginning- and end-of-seizure detection SoC is presented. The dual-channel charge-recycled (DCCR) analog front end (AFE) achieves chopping and time-multiplexing of an amplifier between two channels simultaneously, exploiting a fast-settling DC servo loop with current consumption and NEF of 0.9 μA/channel and 3.29/channel, respectively. The dual-detector architectu...
Battery-less Tri-band-Radio Neuro-monitor and Responsive Neurostimulator for Diagnostics and Treatment of Neurological Disorders. A 0.13 μm CMOS system on a chip (SoC) for 64-channel neuroelectrical monitoring and responsive neurostimulation is presented. The direct-coupled chopper-stabilized neural recording front end rejects up to ±50 mV input dc offset using an in-channel digitally assisted feedback loop. It yields a compact 0.018 mm² integration area and 4.2 μVrms integrated input-referred noise over 1 Hz to 1 kHz freque...
A ±50-mV Linear-Input-Range VCO-Based Neural-Recording Front-End With Digital Nonlinearity Correction. Closed-loop neuromodulation is an essential function in future neural implants for delivering efficient and effective therapy. However, a closed-loop system requires the neural-recording front-end to handle large stimulation artifacts-a feature not supported by most state-of-the-art designs. In this paper, we present a neural-recording front-end that has an input range of ±50 mV and can be used in...
An Integrated Power-Efficient Active Rectifier With Offset-Controlled High Speed Comparators for Inductively Powered Applications. We present an active full-wave rectifier with offset-controlled high speed comparators in standard CMOS that provides high power conversion efficiency (PCE) in high frequency (HF) range for inductively powered devices. This rectifier provides much lower dropout voltage and far better PCE compared to the passive on-chip or off-chip rectifiers. The built-in offset-control functions in the comparator...
A study of phase noise in colpitts and LC-tank CMOS oscillators This paper presents a study of phase noise in CMOS Colpitts and LC-tank oscillators. Closed-form symbolic formulas for the 1/f² phase-noise region are derived for both the Colpitts oscillator (either single-ended or differential) and the LC-tank oscillator, yielding highly accurate results under very general assumptions. A comparison between the differential Colpitts and the LC-tank oscillator is also carried out, which shows that the latter is capable of a 2-dB lower phase-noise figure-of-merit (FoM) when simplified oscillator designs and ideal MOS models are adopted. Several prototypes of both Colpitts and LC-tank oscillators have been implemented in a 0.35-μm CMOS process. The best performance of the LC-tank oscillators shows a phase noise of −142 dBc/Hz at 3-MHz offset frequency from a 2.9-GHz carrier with a 16-mW power consumption, resulting in an excellent FoM of ∼189 dBc/Hz. For the same oscillation frequency, the FoM displayed by the differential Colpitts oscillators is ∼5 dB lower.
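As a cross-check of the quoted figure, the standard oscillator figure-of-merit definition reproduces it:

```latex
\mathrm{FoM} \;=\; -\,L(\Delta f)\;+\;20\log_{10}\!\frac{f_0}{\Delta f}\;-\;10\log_{10}\!\frac{P_{\mathrm{diss}}}{1\,\mathrm{mW}}
```

Plugging in L(Δf) = −142 dBc/Hz at Δf = 3 MHz, f0 = 2.9 GHz, and P_diss = 16 mW gives 142 + 59.7 − 12.0 ≈ 189.7 dBc/Hz, consistent with the quoted ∼189 dBc/Hz.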
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
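Once each cone is approximated by a linear subspace, the recognition step reduces to nearest-subspace classification; a minimal NumPy sketch with synthetic subspaces standing in for the per-pose illumination cones (dimensions, rank, and data are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 256, 5                                   # image dimension, subspace rank

# one low-dimensional linear subspace per (identity, pose); synthetic here
bases = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(10)]

def dist_to_subspace(x, B):
    """Euclidean distance from x to span(B), with B orthonormal."""
    return np.linalg.norm(x - B @ (B.T @ x))

# test image generated from subspace 3 plus noise; classify by nearest subspace
x = bases[3] @ rng.standard_normal(r) + 0.01 * rng.standard_normal(d)
print("identified as:", min(range(10), key=lambda i: dist_to_subspace(x, bases[i])))
```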
WHISK: an uncore architecture for dynamic information flow tracking in heterogeneous embedded SoCs In this paper, we describe for the first time, how Dynamic Information Flow Tracking (DIFT) can be implemented for heterogeneous designs that contain one or more on-chip accelerators attached to a network-on-chip. We observe that implementing DIFT for such systems requires holistic platform level view, i.e., designing individual components in the heterogeneous system to be capable of supporting DIFT is necessary but not sufficient to correctly implement full-system DIFT. Based on this observation we present a new system architecture for implementing DIFT, and also describe wrappers that provide DIFT functionality for third-party IP components. Results show that our implementation minimally impacts performance of programs that do not utilize DIFT, and the price of security is constant for modest amounts of tagging and then sub-linearly increases with the amount of tagging.
All-Digital Background Calibration Technique for Time-Interleaved ADC Using Pseudo Aliasing Signal A new digital background calibration technique for gain mismatches and sample-time mismatches in a Time-Interleaved Analog-to-Digital Converter (TI-ADC) is presented to reduce the circuit area. In the proposed technique, the gain mismatches and the sample-time mismatches are calibrated by using pseudo aliasing signals instead of using a bank of adaptive FIR filters which is conventionally utilized. The pseudo aliasing signals are generated and subtracted from an ADC output. A pseudo aliasing generator consists of the Hadamard transform and a fixed FIR filter. In case of a two-channel 10-bit TI-ADC, the proposed technique reduces the requirement for a word length of the FIR filter by about 50% without a look-up table (LUT) compared with the conventional technique. In addition, the proposed technique requires only one FIR filter compared with the bank of adaptive filters which requires (M-1) FIR filters in an M-channel TI-ADC.
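A toy two-channel, gain-mismatch-only version of the idea (our simplification; timing mismatch additionally needs the fixed FIR branch the abstract describes): the pseudo-aliasing signal is the ADC output modulated by (-1)^n, the two-point Hadamard sequence, and a background LMS loop scales and subtracts it:

```python
import numpy as np

n = 1 << 15
t = np.arange(n)
x = np.sin(2 * np.pi * 0.083 * t) + 0.2 * np.sin(2 * np.pi * 0.137 * t)

a_true = 0.03                 # gain mismatch -> alias term a*(-1)^n*x[n]
mod = (-1.0) ** t             # 2-point Hadamard (+1/-1) modulating sequence
y = x + a_true * mod * x      # two-channel interleaved output with its alias

# background LMS: form the pseudo-aliasing signal (-1)^n*y[n], scale by c,
# subtract, and drive the block average of (-1)^n*yc^2 toward zero
c, mu = 0.0, 0.2
for blk in range(0, n, 1024):
    s = slice(blk, blk + 1024)
    yc = y[s] - c * mod[s] * y[s]           # corrected output
    c += mu * np.mean(mod[s] * yc * yc)     # nonzero only while alias remains
print("estimated mismatch:", c)             # approximately 0.03
```

The update converges because the cross-term between the signal and its modulated copy averages to zero unless residual mismatch remains; the paper's full scheme additionally handles timing skew and M-channel Hadamard sequences.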
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√(log n))}·n time units. We also show a lower bound of 2^{Ω(√(log n))}·n time units.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
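The slope-dependent rule can be mimicked in a few lines (a behavioral sketch with arbitrary thresholds, not the pulse-generator circuit): accumulate the absolute slope and emit a sample each time the accumulator crosses a threshold, so sample density follows signal activity:

```python
import numpy as np

def slope_sample(x, thresh):
    """Emit a sample whenever accumulated |slope| exceeds thresh:
    sampling density rises where the signal moves quickly."""
    out, acc = [0], 0.0
    for i in range(1, len(x)):
        acc += abs(x[i] - x[i - 1])
        if acc >= thresh:
            out.append(i)
            acc = 0.0
    return np.array(out)

t = np.linspace(0, 1, 4000)
ecg_like = np.exp(-((t - 0.5) ** 2) / 2e-4)      # narrow spike on a flat baseline
idx = slope_sample(ecg_like, thresh=0.05)
print(len(idx), "samples;", np.sum((idx > 1800) & (idx < 2200)), "land inside the spike")
```

Nearly all emitted samples cluster around the spike, which is the compression mechanism the converter exploits on ECG-like inputs.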
1.044807
0.04
0.04
0.04
0.021059
0.013333
0.005278
0.000898
0
0
0
0
0
0
A Low-Jitter and Low-Reference-Spur Ring-VCO-Based Switched-Loop Filter PLL Using a Fast Phase-Error Correction Technique. A low-jitter and low-reference-spur ring-type voltage-controlled oscillator (VCO)-based switched-loop filter (SLF) phase-locked loop (PLL) is presented. To enhance the capability of suppressing jitter of a VCO, we propose a fast phase-error correction (FPEC) technique that emulates the phase-realignment mechanism of an injection-locked clock multiplier. By the proposed FPEC technique, accumulated ...
An Ultra-Low-Jitter, mmW-Band Frequency Synthesizer Based on Digital Subsampling PLL Using Optimally Spaced Voltage Comparators This article presents a cascaded architecture of a frequency synthesizer to generate ultra-low-jitter output signals in a millimeter-wave (mmW) frequency band from 28 to 31 GHz. The mmW-band injection-locked frequency multiplier (ILFM) placed at the second stage has a wide bandwidth so that the jitter performance of this frequency synthesizer is determined by the GHz-band digital subsampling phase-locked loop (SSPLL) at the first stage. To suppress the quantization noise of the digital SSPLL while using a small amount of power, optimally spaced voltage comparators (OSVCs) are presented as a voltage quantizer. The design was fabricated using 65-nm CMOS technology. In measurements, the prototype frequency synthesizer generated output signals in the range of 28–31 GHz, with an rms jitter of less than 80 fs and an integrated phase noise (IPN) of less than −40 dBc. The active silicon area was 0.32 mm², and the total power consumption was 41.8 mW.
Digital Background Correction of Harmonic Distortion in Pipelined ADCs. Pipelined analog-to-digital converters (ADCs) are sensitive to distortion introduced by the residue amplifiers in their first few stages. Unfortunately, residue amplifier distortion tends to be inversely related to power consumption in practice, so the residue amplifiers usually are the dominant consumers of power in high-resolution pipelined ADCs. This paper presents a background calibration tech...
A 1.6-to-3.0-GHz Fractional-N MDLL with a Digital-to-Time Converter Range-Reduction Technique Achieving 397fs Jitter at 2.5-mW Power. This article analyzes the jitter-power tradeoff in multiplying delay-locked loops (MDLLs), which differs from the more typical phase-locked loop one, and identifies a design optimization criterion. The methodology is applied to a fractional-N MDLL with a sub-sampling bang-bang phase detector and a novel digital-to-time converter (DTC) range-reduction technique, which limits the jitter added to the reference signal, at no additional power penalty. The prototype has been implemented in 65-nm CMOS and covers a 1.6-to-3.0-GHz tuning range, achieving an absolute rms jitter (integrated from 30 kHz to 30 MHz) of 397 fs at 2.5-mW power, with a corresponding jitter-power figure of merit of −244 dB. In-band fractional spurs are as low as −51.5 dB and the occupied core area is 0.0275 mm².
Jitter Minimization in Digital PLLs with Mid-Rise TDCs This paper analyzes the absolute jitter performance of digital phase-locked loops and compares the case when either a multi-bit time-to-digital converter (TDC) with mid-rise characteristic or a bang-bang phase detector is adopted. The linear equivalent model of the PLL and expressions for random-noise and limit-cycle jitter are first derived for the case of a 2-bit time-to-digital converter with a mid-rise characteristic, and the optimal TDC resolution is determined. The analysis, which accounts for TDC mismatches, shows that, compared to the 1-bit one, the 2-bit time-to-digital converter can substantially reduce the quantization noise in the case of dominant random-walk noise at the TDC input. Moving to the N_b-bit mid-rise TDC case, the quantization noise can be further reduced at the cost of higher complexity and finer time resolution. The choice of N_b = 2 seems to be the best compromise between jitter reduction and complexity increase. Time-domain simulations assess the theoretical framework and demonstrate the validity of the assumptions made throughout the paper.
A Fully Synthesizable All-Digital PLL With Interpolative Phase Coupled Oscillator, Current-Output DAC, and Fine-Resolution Digital Varactor Using Gated Edge Injection Technique This paper presents a fully synthesizable phase-locked loop (PLL) based on injection locking, with an interpolative phase-coupled oscillator, a current-output digital-to-analog converter (DAC), and a fine-resolution digital varactor. All circuits that make up the PLL are designed and implemented using digital standard cells without any modification, and automatically placed and routed (P&R) by a digital design flow without any manual placement. Implemented in a 65 nm digital CMOS process, this work occupies only 110 μm × 60 μm layout area, which is the smallest PLL reported so far to the best knowledge of the authors. The measurement results show that this work achieves a 1.7 ps RMS jitter at 900 MHz output frequency while consuming 780 μW DC power.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
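The fast non-dominated sorting step is easy to reproduce; a direct O(MN²) Python transcription (minimization assumed for all objectives):

```python
import numpy as np

def fast_non_dominated_sort(F):
    """O(M*N^2) non-dominated sorting of an N x M objective matrix F
    (all objectives minimized); returns a list of fronts of point indices."""
    n = len(F)
    S = [[] for _ in range(n)]        # points each point dominates
    c = [0] * n                       # how many points dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                S[p].append(q)        # p dominates q
            elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                c[p] += 1             # q dominates p
        if c[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:                  # peel off successive fronts
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                c[q] -= 1
                if c[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]

pts = np.array([[1, 5], [2, 2], [5, 1], [3, 3], [4, 4]])
print(fast_non_dominated_sort(pts))   # [[0, 1, 2], [3], [4]]
```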
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
An Introduction To Compressive Sampling Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article s...
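A compact way to see the alternative to Nyquist sampling (a standard ISTA demo under our own toy dimensions, not taken from the article): recover a k-sparse signal from m << n random measurements by iterative soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 80, 8                     # signal dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # far fewer samples than Nyquist

# ISTA: iterative soft-thresholding for min ||y - Az||^2 / 2 + lam * ||z||_1
lam, L = 0.01, np.linalg.norm(A, 2) ** 2       # L = Lipschitz constant of gradient
z = np.zeros(n)
for _ in range(500):
    g = z + (A.T @ (y - A @ z)) / L            # gradient step
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage step
print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```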
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A 41-phase switched-capacitor power converter with 3.8mV output ripple and 81% efficiency in baseline 90nm CMOS.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback information from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-R_ESR design configuration. The prototype fabricated using a TSMC 0.25 μm CMOS process occupies an area of 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
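A behavioral model of the idea (our toy, ideal-component sketch; the trigger rule and thresholds are simplified relative to the chip): run the normal binary search, then spend one redundant cycle checking the residue and nudging the code by ±1 LSB if it disagrees:

```python
def sar_convert(vin, vref=1.0, bits=10, comparator_offset=0.0):
    """Toy ideal-capacitor SAR conversion plus one redundant check cycle."""
    code, dac, step = 0, vref / 2.0, vref / 4.0
    for _ in range(bits):                      # normal binary search
        bit = 1 if vin >= dac + comparator_offset else 0
        code = (code << 1) | bit
        dac += step if bit else -step
        step /= 2.0
    lsb = vref / (1 << bits)
    recon = (code + 0.5) * lsb                 # mid-code reconstruction
    if vin - recon > 0.5 * lsb:                # redundant cycle: residue check
        code += 1                              # repair a -1 LSB decision error
    elif recon - vin > 0.5 * lsb:
        code -= 1                              # repair a +1 LSB decision error
    return code

print(sar_convert(0.3))                          # 307, no correction triggered
print(sar_convert(0.3, comparator_offset=6e-4))  # offset-induced error repaired to 307
```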
1.1
0.1
0.1
0.1
0.1
0.025
0
0
0
0
0
0
0
0
TraNNsformer: Neural Network Transformation for Memristive Crossbar based Neuromorphic System Design. Implementation of Neuromorphic Systems using post Complementary Metal-Oxide-Semiconductor (CMOS) technology based Memristive Crossbar Array (MCA) has emerged as a promising solution to enable low-power acceleration of neural networks. However, the recent trend to design Deep Neural Networks (DNNs) for achieving human-like cognitive abilities poses significant challenges towards the scalable design of neuromorphic systems (due to the increase in computation/storage demands). Network pruning [7] is a powerful technique to remove redundant connections for designing optimally connected (maximally sparse) DNNs. However, such pruning techniques induce irregular connections that are incoherent to the crossbar structure. Eventually they produce DNNs with highly inefficient hardware realizations (in terms of area and energy). In this work, we propose TraNNsformer - an integrated training framework that transforms DNNs to enable their efficient realization on MCA-based systems. TraNNsformer first prunes the connectivity matrix while forming clusters with the remaining connections. Subsequently, it retrains the network to fine tune the connections and reinforce the clusters. This is done iteratively to transform the original connectivity into an optimally pruned and maximally clustered mapping. We evaluated the proposed framework by transforming different Multi-Layer Perceptron (MLP) based Spiking Neural Networks (SNNs) on a wide range of datasets (MNIST, SVHN and CIFAR10) and executing them on MCA-based systems to analyze the area and energy benefits. Without accuracy loss, TraNNsformer reduces the area (energy) consumption by 28% - 55% (49% - 67%) with respect to the original network. Compared to network pruning, TraNNsformer achieves 28% - 49% (15% - 29%) area (energy) savings. Furthermore, TraNNsformer is a technology-aware framework that allows mapping a given DNN to any MCA size permissible by the memristive technology for reliable operations.
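To see why unstructured pruning maps poorly onto crossbars (a motivation-only sketch with hypothetical sizes, not the TraNNsformer algorithm itself): magnitude-prune a weight matrix and measure per-crossbar occupancy:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((128, 128))
mask = np.abs(W) > np.quantile(np.abs(W), 0.9)   # keep the top 10% of weights

xbar = 32                                        # hypothetical crossbar dimension
dens = [mask[r:r + xbar, c:c + xbar].mean()
        for r in range(0, 128, xbar) for c in range(0, 128, xbar)]
print(f"{len(dens)} crossbars, mean occupancy {np.mean(dens):.1%}")
# unstructured pruning leaves every 32x32 tile ~10% full, so no tile can be
# dropped; re-clustering the survivors into a few dense tiles is what lets
# TraNNsformer-style training discard the empty crossbars
```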
Reduction and IR-drop compensation techniques for reliable neuromorphic computing systems Neuromorphic computing system (NCS) is a promising architecture to combat the well-known memory bottleneck in Von Neumann architecture. The recent breakthrough on memristor devices has made an important step toward realizing a low-power, small-footprint NCS on-a-chip. However, the currently low manufacturing reliability of nano-devices and the voltage IR-drop along metal wires and memristor arrays severely limit the scale of memristor crossbar based NCS and hinder design scalability. In this work, we propose a novel system reduction scheme that significantly lowers the required dimension of the memristor crossbars in NCS while maintaining high computing accuracy. An IR-drop compensation technique is also proposed to overcome the adverse impacts of the wire resistance and the sneak-path problem in large memristor crossbar designs. Our simulation results show that the proposed techniques can improve computing accuracy by 27.0% and reduce circuit area by 38.7% compared to the original NCS design.
Spin-Transfer Torque Memories: Devices, Circuits, and Systems. Spin-transfer torque magnetic memory (STT-MRAM) has gained significant research interest due to its nonvolatility and zero standby leakage, near unlimited endurance, excellent integration density, acceptable read and write performance, and compatibility with CMOS process technology. However, several obstacles need to be overcome for STT-MRAM to become the universal memory technology. This paper fi...
Technology Aware Training in Memristive Neuromorphic Systems based on non-ideal Synaptic Crossbars. The advances in the field of machine learning using neuromorphic systems have paved the pathway for extensive research on possibilities of hardware implementations of neural networks. Various memristive technologies such as oxide-based devices, spintronics, and phase change materials have been explored to implement the core functional units of neuromorphic systems, namely the synaptic network, and...
BLADE: An in-Cache Computing Architecture for Edge Devices Area and power-constrained edge devices are increasingly utilized to perform compute intensive workloads, necessitating increasingly area- and power-efficient accelerators. In this context, in-SRAM computing performs hundreds of parallel operations on spatially local data common in many emerging workloads, while reducing power consumption due to data movement. However, in-SRAM computing faces many challenges, including integration into the existing architecture, arithmetic operation support, data corruption at high operating frequencies, inability to run at low voltages, and low area density. To meet these challenges, this article introduces BLADE, a BitLine Accelerator for Devices on the Edge. BLADE is an in-SRAM computing architecture that utilizes local wordline groups to perform computations at a frequency 2.8× higher than state-of-the-art in-SRAM computing architectures. BLADE is integrated into the cache hierarchy of low-voltage edge devices, and simulated and benchmarked at the transistor, architecture, and software abstraction levels. Experimental results demonstrate performance/energy gains over an equivalent NEON-accelerated processor for a variety of edge device workloads, namely, cryptography (4× performance gain/6× energy reduction), video encoding (6×/2×), and convolutional neural networks (3×/1.5×), while maintaining the highest frequency/energy ratio (up to 2.2 GHz @ 1 V) of any conventional in-SRAM computing architecture, and a low area overhead of less than 8 percent.
An Embedded nand Flash-Based Compute-In-Memory Array Demonstrated in a Standard Logic Process A neural network hardware inspired by the 3-D NAND flash array structure was experimentally demonstrated in a standard 65-nm CMOS process. Logic-compatible embedded flash memory cells were used for storing multi-level synaptic weights while a bit-serial architecture enables 8 bit × 8 bit multiply-and-accumulate operation. A novel back-pattern tolerant program-verify scheme reduces the cell current variation to less than 0.6 μA. Positive and negative weights are stored in adjacent bitlines, generating a differential output signal. Our eNAND-based neural network core achieves a 98.5% handwritten digit recognition accuracy which is within 0.5% of the software accuracy for the same weight precision. To the best of our knowledge, this work represents the first physical demonstration of an embedded NAND flash-based compute-in-memory chip in a standard logic process.
Fundamental limits on the precision of in-memory architectures This paper obtains the fundamental limits on the computational precision of in-memory computing architectures (IMCs). Various compute SNR metrics for IMCs are defined and their interrelationships analyzed to show that the accuracy of IMCs is fundamentally limited by the compute SNR (SNR_a) of the analog core, and that activation, weight, and output precision need to be assigned appropriately for the final output SNR to approach it (SNR_T → SNR_a). The minimum precision criterion (MPC) is proposed to minimize the output and hence the column analog-to-digital converter (ADC) precision. The charge summing (QS) compute model and its associated IMC QS-Arch are studied to obtain analytical models for its compute SNR, minimum ADC precision, energy, and latency. Compute SNR models of QS-Arch are validated via Monte Carlo simulations in a 65 nm CMOS process. Employing these models, upper bounds on SNR_a of a QS-Arch-based IMC employing a 512-row SRAM array are obtained, and it is shown that QS-Arch's energy cost reduces by 3.3× for every 6 dB drop in SNR_a, and that the maximum achievable SNR_a reduces with technology scaling while the energy cost at the same SNR_a increases. These models also indicate the existence of an upper bound on the dot product dimension N due to voltage headroom clipping; this bound can be doubled for every 3 dB drop in SNR_a.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than two earlier heuristics, namely the LCA [1] and degree-based [11] solutions.
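The diffusion of node identities can be sketched in a few lines (a simplified synchronous floodmax/floodmin; the paper's tie-breaking and node-pairing rules are omitted):

```python
def max_min_d_cluster(adj, d):
    """Simplified floodmax/floodmin d-clustering over an adjacency dict."""
    val = {v: v for v in adj}
    for _ in range(d):                 # floodmax: largest id spreads d hops
        val = {v: max([val[v]] + [val[u] for u in adj[v]]) for v in adj}
    win = dict(val)
    for _ in range(d):                 # floodmin: smallest surviving id spreads back
        win = {v: min([win[v]] + [win[u] for u in adj[v]]) for v in adj}
    # a node whose own id survives either phase declares itself a clusterhead
    return {v for v in adj if win[v] == v or val[v] == v}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}  # 6-node path
print(max_min_d_cluster(adj, d=2))     # {2, 5}: every node within 2 hops of a head
```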
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω---a fundamental failure detector at the core of many algorithms---in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector ◇P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption---the existence of some unknown correct process whose asynchronous links are lossy but fair---we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm² and consumes 178.4 mW at 5 Gb/s.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
The rise of "big data" on cloud computing: Review and open research issues. Cloud computing is a powerful technology to perform massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data or big data generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low-power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme, without common-mode voltage variation. In addition, a near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation at ultra-low supply voltages while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
1.2
0.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
ShiDianNao: shifting vision processing closer to the sensor In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The neural networks which are state-of-the-art for these applications are Convolutional Neural Networks (CNN), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows a CNN to be entirely mapped within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses combined with a careful exploitation of the specific data access patterns within CNNs allows us to design an accelerator which is 60× more energy efficient than the previous state-of-the-art neural network accelerator. We present a full design down to the layout at 65 nm, with a modest footprint of 4.86 mm² and consuming only 320 mW, but still about 30× faster than high-end GPUs.
HASCO: Towards Agile HArdware and Software CO-design for Tensor Computation Tensor computations overwhelm traditional general-purpose computing devices due to the large amounts of data and operations of the computations. They call for a holistic solution composed of both hardware acceleration and software mapping. Hardware/software (HW/SW) co-design optimizes the hardware and software in concert and produces high-quality solutions. There are two main challenges in the co-design flow. First, multiple methods exist to partition tensor computation and have different impacts on performance and energy efficiency. Besides, the hardware part must be implemented by the intrinsic functions of spatial accelerators. It is hard for programmers to identify and analyze the partitioning methods manually. Second, the overall design space composed of HW/SW partitioning, hardware optimization, and software optimization is huge. The design space needs to be efficiently explored. To this end, we propose an agile co-design approach HASCO that provides an efficient HW/SW solution to dense tensor computation. We use tensor syntax trees as the unified IR, based on which we develop a two-step approach to identify partitioning methods. For each method, HASCO explores the hardware and software design spaces. We propose different algorithms for the explorations, as they have distinct objectives and evaluation costs. Concretely, we develop a multi-objective Bayesian optimization algorithm to explore hardware optimization. For software optimization, we use heuristic and Q-learning algorithms. Experiments demonstrate that HASCO achieves a 1.25X to 1.44X latency reduction through HW/SW co-design compared with developing the hardware and software separately.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
Coarse grain reconfigurable architecture (embedded tutorial) The paper gives a brief survey over a decade of R&D on coarse grain reconfigurable hardware and related compilation techniques and points out its significance to the emerging discipline of reconfigurable computing.
Cambricon-F: machine learning computers with fractal von neumann architecture Machine learning techniques are pervasive tools for emerging commercial applications and many dedicated machine learning computers on different scales have been deployed in embedded devices, servers, and data centers. Currently, most machine learning computer architectures still focus on optimizing performance and energy efficiency instead of programming productivity. However, with the fast development in silicon technology, programming productivity, including programming itself and software stack development, becomes the vital reason instead of performance and power efficiency that hinders the application of machine learning computers. In this paper, we propose Cambricon-F, which is a series of homogeneous, sequential, multi-layer, layer-similar, machine learning computers with the same ISA. A Cambricon-F machine has a fractal von Neumann architecture to iteratively manage its components: it is with von Neumann architecture and its processing components (sub-nodes) are still Cambricon-F machines with von Neumann architecture and the same ISA. Since different Cambricon-F instances with different scales can share the same software stack on their common ISA, Cambricon-Fs can significantly improve the programming productivity. Moreover, we address four major challenges in Cambricon-F architecture design, which allow Cambricon-F to achieve a high efficiency. We implement two Cambricon-F instances at different scales, i.e., Cambricon-F100 and Cambricon-F1. Compared to GPU based machines (DGX-1 and 1080Ti), Cambricon-F instances achieve 2.82x, 5.14x better performance, 8.37x, 11.39x better efficiency on average, with 74.5%, 93.8% smaller area costs, respectively.
DaDianNao: A Machine-Learning Supercomputer Many companies are deploying services, either for consumers or industry, which are largely based on machine-learning algorithms for sophisticated processing of large amounts of data. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be both computationally and memory intensive. A number of neural network accelerators have been recently proposed which can offer high computational capacity/area ratio, but which remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the CNNs and DNNs memory footprint, while large, is not beyond the capability of the on chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communications, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 450.65x over a GPU, and reduce the energy by 150.31x on average for a 64-chip system. We implement the node down to the place and route at 28nm, containing a combination of custom storage and computational units, with industry-grade interconnects.
ExTensor: An Accelerator for Sparse Tensor Algebra Generalized tensor algebra is a prime candidate for acceleration via customized ASICs. Modern tensors feature a wide range of data sparsity, with the density of non-zero elements ranging from 10⁻⁶% to 50%. This paper proposes a novel approach to accelerate tensor kernels based on the principle of hierarchical elimination of computation in the presence of sparsity. This approach relies on rapidly finding intersections---situations where both operands of a multiplication are non-zero---enabling new data fetching mechanisms and avoiding memory latency overheads associated with sparse kernels implemented in software. We propose the ExTensor accelerator, which builds these novel ideas on handling sparsity into hardware to enable better bandwidth utilization and compute throughput. We evaluate ExTensor on several kernels relative to industry libraries (Intel MKL) and state-of-the-art tensor algebra compilers (TACO). When bandwidth normalized, we demonstrate an average speedup of 3.4×, 1.3×, 2.8×, 24.9×, and 2.7× on SpMSpM, SpMM, TTV, TTM, and SDDMM kernels respectively over a server class CPU.
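The intersection primitive at the core of the approach is simply a two-pointer merge over sorted coordinates; a software sketch of the principle (our illustration, not the accelerator's datapath):

```python
def sparse_dot(a, b):
    """Dot product of two sparse vectors stored as sorted (index, value) lists;
    the two-pointer scan multiplies only where both operands are non-zero --
    the 'intersection' the accelerator performs hierarchically in hardware."""
    i = j = 0
    acc = 0.0
    while i < len(a) and j < len(b):
        ia, ib = a[i][0], b[j][0]
        if ia == ib:
            acc += a[i][1] * b[j][1]
            i += 1
            j += 1
        elif ia < ib:
            i += 1            # a-only coordinate: no multiply, no operand fetch
        else:
            j += 1            # b-only coordinate: likewise eliminated
    return acc

x = [(1, 2.0), (4, -1.0), (7, 3.0)]
y = [(0, 5.0), (4, 4.0), (7, 0.5)]
print(sparse_dot(x, y))       # -4.0 + 1.5 = -2.5
```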
Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices A recent trend in deep neural network (DNN) development is to extend the reach of deep learning applications to platforms that are more resource and energy-constrained, e.g., mobile devices. These endeavors aim to reduce the DNN model size and improve the hardware processing efficiency and have resulted in DNNs that are much more compact in their structures and/or have high data sparsity. These compact or sparse models are different from the traditional large ones in that there is much more variation in their layer shapes and sizes and often require specialized hardware to exploit sparsity for performance improvement. Therefore, many DNN accelerators designed for large DNNs do not perform well on these models. In this paper, we present Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs. To deal with the widely varying layer shapes and sizes, it introduces a highly flexible on-chip network, called hierarchical mesh, that can adapt to the different amounts of data reuse and bandwidth requirements of different data types, which improves the utilization of the computation resources. Furthermore, Eyeriss v2 can process sparse data directly in the compressed domain for both weights and activations and therefore is able to improve both processing speed and energy efficiency with sparse models. Overall, with sparse MobileNet, Eyeriss v2 in a 65-nm CMOS process achieves a throughput of 1470.6 inferences/s and 2560.3 inferences/J at a batch size of 1, which is 12.6× faster and 2.5× more energy-efficient than the original Eyeriss running MobileNet.
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors. Because the memory layouts and dataflows in these architectures are optimized for the access patterns during inference, however, they do not efficiently support the emerging sparse training techniques. In this paper, we demonstrate (a) that accelerating sparse training requires a co-design approach where algorithms are adapted to suit the constraints of hardware, and (b) that hardware for sparse DNN training must tackle constraints that do not arise in inference accelerators. As proof of concept, we adapt a sparse training algorithm to be amenable to hardware acceleration; we then develop dataflow, data layout, and load-balancing techniques to accelerate it. The resulting system is a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models, without first training, then pruning, and finally retraining a dense model. Compared to training the equivalent unpruned models using a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26× less energy and offers up to 4× speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
Fog Computing: Platform and Applications Despite the broad utilization of cloud computing, some applications and services still cannot benefit from this popular computing paradigm due to inherent problems of cloud computing such as unacceptable latency, lack of mobility support and location-awareness. As a result, fog computing has emerged as a promising infrastructure to provide elastic resources at the edge of the network. In this paper, we have discussed current definitions of fog computing and similar concepts, and proposed a more comprehensive definition. We also analyzed the goals and challenges in the fog computing platform, and presented a platform design with several exemplar applications. We finally implemented and evaluated a prototype fog computing platform.
Design-oriented estimation of thermal noise in switched-capacitor circuits. Thermal noise represents a major limitation on the performance of most electronic circuits. It is particularly important in switched circuits, such as the switched-capacitor (SC) filters widely used in mixed-mode CMOS integrated circuits. In these circuits, switching introduces a boost in the power spectral density of the thermal noise due to aliasing. Unfortunately, even though the theory of nois...
Modular software-defined radio In view of the technical and commercial boundary conditions for software-defined radio (SDR), it is suggestive to reconsider the concept anew from an unconventional point of view. The organizational principles of signal processing (rather than the signal processing algorithms themselves) are the main focus of this work on modular software-defined radio. Modularity and flexibility are just two key characteristics of the SDR environment which extend smoothly into the modeling of hardware and software. In particular, the proposed model of signal processing software includes irregular, connected, directed, acyclic graphs with random node weights and random edges. Several approaches for mapping such software to a given hardware are discussed. Taking into account previous findings as well as new results from system simulations presented here, the paper finally concludes with the utility of pipelining as a general design guideline for modular software-defined radio.
Model Predictive Climate Control of a Swiss Office Building: Implementation, Results, and Cost-Benefit Analysis This paper reports the final results of the predictive building control project OptiControl-II that encompassed seven months of model predictive control (MPC) of a fully occupied Swiss office building. First, this paper provides a comprehensive literature review of experimental building MPC studies. Second, we describe the chosen control setup and modeling, the main experimental results, as well as simulation-based comparisons of MPC to industry-standard control using the EnergyPlus simulation software. Third, the costs and benefits of building MPC for cases similar to the investigated building are analyzed. In the experiments, MPC controlled the building reliably and achieved a good comfort level. The simulations suggested a significantly improved control performance in terms of energy and comfort compared with the previously installed industry-standard control strategy. However, for similar buildings and with the tools currently available, the required initial investment is likely too high to justify the deployment in everyday building projects on the basis of operating cost savings alone. Nevertheless, development investments in an MPC building automation framework and a tool for modeling building thermal dynamics together with the increasing importance of demand response and rising energy prices may push the technology into the net benefit range.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.011001
0.01
0.01
0.01
0.01
0.00553
0.004753
0.002739
0.000493
0.000019
0
0
0
0
Adaptive Event-Triggered Output Feedback for Nonlinear Systems With Unknown Polynomial-of-Output Growth Rate This paper investigates global stabilization via adaptive event-triggered output feedback for a class of uncertain nonlinear systems. Typically, an unknown polynomial-of-output rate is admitted in the unmeasurable-state-dependent growth of the systems. This calls for an advanced compensation strategy based on dynamic high gain, which in turn requires more intelligent execution in the event-triggered control architecture. To this end, a novel event-triggering mechanism is designed with two events separately evaluating the behaviors of the dynamic gain and the controller signal. Particularly, the event on the controller signal is enforced to suspend for a certain time after each execution to guarantee a positive lower bound for the inter-execution intervals. More importantly, the suspension time and the threshold therein are both adjusted online according to the dynamic gain (rather than pre-specified), and they can become small enough as the dynamic gain increases. This ensures timely execution for the effectiveness of adaptive compensation. Then, with the dynamic gain delicately designed to counteract the influence of the execution error, an event-triggered controller via adaptive output feedback is proposed to make the original system states and observer states converge to zero. A further attempt is made at more efficient resource saving and disturbance tolerance.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
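For a concrete feel for dominance frontiers, the sketch below uses the later Cooper-Harvey-Kennedy formulation rather than the paper's own algorithm: for each join point n, walk from each predecessor up the dominator tree until n's immediate dominator, adding n to the frontier of every node passed. The `preds` and `idom` maps are assumed precomputed inputs.

```python
# Dominance frontiers via the Cooper-Harvey-Kennedy walk (a sketch; the
# paper's own algorithm builds frontiers bottom-up over the dominator
# tree, but the result is the same set of frontiers).

def dominance_frontiers(preds, idom):
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points create frontiers
            for p in ps:
                runner = p
                while runner != idom[n]:  # stop at n's immediate dominator
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))   # merge lands in DF(a) and DF(b)
```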
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
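Chord's one operation (key to node) comes down to the successor rule on an identifier ring. The sketch below shows that rule with the whole ring held in one sorted list; a real Chord node instead resolves the query in O(log N) hops via finger tables. The 16-bit ring and node names are arbitrary illustration choices.

```python
import hashlib
from bisect import bisect_right

RING_BITS = 16  # arbitrary small ring for illustration

def ident(name: str) -> int:
    """Hash a name onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << RING_BITS)

def successor(node_ids, key_id):
    """First node clockwise from key_id, wrapping around the ring."""
    i = bisect_right(node_ids, key_id)
    return node_ids[i % len(node_ids)]

nodes = sorted(ident(f"node-{i}") for i in range(8))
key = ident("some-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")
```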
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
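The composite metrics in the comparison above are plain products of energy, delay, and area; the sketch below spells them out with invented placeholder numbers (not McPAT output) to show how the area term can flip a ranking.

```python
# Energy-delay-area product (EDAP) and energy-delay-area^2 product
# (EDA2P), as used above. Units: joules, seconds, mm^2. The two
# candidate design points are hypothetical.

def edap(energy, delay, area):
    return energy * delay * area

def eda2p(energy, delay, area):
    return energy * delay * area ** 2   # penalizes area more heavily

eight_core = dict(energy=1.0, delay=1.00, area=1.20)  # hypothetical
four_core  = dict(energy=1.1, delay=1.05, area=1.00)  # hypothetical
# Without area, the 8-core point wins on energy-delay (1.0 vs 1.155);
# once area enters, the 4-core point wins both EDAP and EDA2P.
print(edap(**eight_core), edap(**four_core))    # 1.2 vs 1.155
print(eda2p(**eight_core), eda2p(**four_core))  # 1.44 vs 1.155
```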
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
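As one concrete instance from the applications listed, here is a compact NumPy sketch of ADMM for the lasso: minimize (1/2)||Ax - b||² + λ||z||₁ subject to x = z. The step size rho, iteration count, and test problem are arbitrary choices for illustration.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: ridge-like solve reusing the cached Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)  # z-update: l1 prox
        u = u + x - z                         # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(np.round(admm_lasso(A, b, lam=0.5), 2))  # sparse estimate near x_true
```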
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Self-stabilizing leader election in dynamic networks Three silent self-stabilizing asynchronous distributed algorithms are given for the leader election problem in a dynamic network with unique IDs, using the composite model of computation. A leader is elected for each connected component of the network. A BFS tree is also constructed in each component, rooted at the leader. This election takes O(Diam) rounds, where Diam is the maximum diameter of any component. Links and processes can be added or deleted, and data can be corrupted. After each such topological change or data corruption, the leader and BFS tree are recomputed if necessary. All three algorithms work under the unfair daemon. The three algorithms differ in their leadership stability. The first algorithm, which is the fastest in the worst case, chooses an arbitrary process as the leader. The second algorithm chooses the process of highest priority in each component, where priority can be defined in a variety of ways. The third algorithm has the strictest leadership stability. If the configuration is legitimate, and then any number of topological faults occur at the same time but no variables are corrupted, the third algorithm will converge to a new legitimate state in such a manner that no process changes its choice of leader more than once, and each component will elect a process which was a leader before the fault, provided there is at least one former leader in that component.
Unifying stabilization and termination in message-passing systems The paper dispels the myth that it is impossible for a message-passing program to be both terminating and stabilizing. We consider a rather general notion of termination: a terminating program eventually stops its execution after the environment ceases to provide input. We identify termination-symmetry to be a necessary condition for a problem to admit a solution with such properties. Our results do confirm that a number of well-known problems (e.g., consensus, leader election) do not allow a terminating and stabilizing solution. On the flip side, they show that other problems such as mutual exclusion and reliable-transmission allow such solutions. We present a message-passing solution to the mutual exclusion problem that is both stabilizing and terminating. We also describe an approach of adding termination to a stabilizing program. To illustrate this approach, we add termination to a stabilizing solution for the reliable transmission problem.
A Mobility Based Metric for Clustering in Mobile Ad Hoc Networks This paper presents a novel relative mobility metric for mobile ad hoc networks (MANETs). It is based on the ratio of power levels due to successive receptions at each node from its neighbors. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the "least clusterhead change" version of the well known Lowest-ID clustering algorithm [3]. We show reduction of as much as 33% in the rate of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that using MOBIC can result in a more stable configuration, and thus yield better performance.
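Below is a sketch of the relative-mobility idea, under the assumption that the per-neighbor metric is the dB ratio of received powers of two successive packets and that per-node aggregation is the variance of those values (lower variance suggesting a more stable clusterhead candidate, which MOBIC prefers). Field names and sample powers are made up.

```python
import math

# Relative mobility between a node and one neighbor: ratio of received
# powers of two successive packets from that neighbor, in dB. A value
# near 0 dB means the pair is relatively stationary.

def relative_mobility_db(rx_power_new, rx_power_old):
    """Positive -> neighbor approaching; negative -> receding."""
    return 10.0 * math.log10(rx_power_new / rx_power_old)

def aggregate_mobility(neighbor_samples):
    """neighbor_samples: list of (new_power, old_power) pairs, one per
    neighbor. Aggregate as the variance of the per-neighbor metric."""
    m = [relative_mobility_db(new, old) for new, old in neighbor_samples]
    mean = sum(m) / len(m)
    return sum((v - mean) ** 2 for v in m) / len(m)

samples = [(1.00e-6, 0.95e-6), (2.10e-6, 2.00e-6), (0.80e-6, 1.10e-6)]
print(f"mobility score: {aggregate_mobility(samples):.3f} (lower = more stable)")
```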
Regional consecutive leader election in mobile ad-hoc networks In this paper we introduce the regional consecutive leader election (RCLE) problem, which extends the classic leader election problem to the continuously-changing environment of mobile ad-hoc networks. We assume that mobile nodes, including the currently elected leader, can fail by crashing, and might enter or exit the region of interest at any time. We require the existence of certain paths that ensure a bound on the time for propagation of information within the region. We present and prove correct an algorithm that solves RCLE for a fixed region in 2 or 3-dimensional space. Our algorithm does not rely on the knowledge of the total number of nodes in the system nor on a common startup time. In the second part of the paper, we introduce a condition on mobility that is sufficient to ensure the existence of the paths required by our RCLE algorithm.
Optimal regional consecutive leader election in mobile ad-hoc networks The regional consecutive leader election (RCLE) problem requires mobile nodes to elect a leader within bounded time upon entering a specific region. We prove that any algorithm requires Ω(Dn) rounds for leader election, where D is the diameter of the network and n is the total number of nodes. We then present a fault-tolerant distributed algorithm that solves the RCLE problem and works even in settings where nodes do not have access to synchronized clocks. Since nodes set their leader variable within O(Dn) rounds, our algorithm is asymptotically optimal with respect to time complexity. Due to its low message bit complexity, we believe that our algorithm is of practical interest for mobile wireless ad-hoc networks. Finally, we present a novel and intuitive constraint on mobility that guarantees a bounded communication diameter among nodes within the region of interest.
Communication-efficient failure detection and consensus in omission environments Failure detectors have been shown to be a very useful mechanism to solve the consensus problem in the crash failure model, for which a number of communication-efficient algorithms have been proposed. In this paper we deal with the definition, implementation and use of communication-efficient failure detectors in the general omission failure model, where processes can fail by crashing and by omitting messages when sending and/or receiving. We first define a new failure detector class for this model in terms of completeness and accuracy properties. Then we propose an algorithm that implements a failure detector of the proposed class in a communication-efficient way, in the sense that only a linear number of links are used to send messages forever. We also explain how the well-known consensus algorithm of Chandra and Toueg can be adapted in order to use the proposed failure detector.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced "cache" memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called "sectors." Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
Design Considerations for a Direct RF Sampling Mixer This brief presents a detailed time-domain and frequency-domain analysis of a direct RF sampling mixer. Design considerations such as incomplete charge sharing and large signal nonlinearity are addressed. An accurate frequency-domain transfer function is derived. Estimation of noise figure is given. The analysis applies to the design of sub-sampling mixers that have become important for software-d...
Interactive presentation: An FPGA-based all-digital transmitter with radio frequency output for software defined radio In this paper, we present the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes the RF signal in the digital domain, therefore eliminating the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from the RFPWM is centered at 800 MHz with a 64QAM signaling format. The entire transmitter is implemented using a Xilinx Virtex2pro device with an on-chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45 dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR.
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment, which results in resource allocation problems, or are job-based, where for each invocation the job along with the data has to be transferred for remote execution, resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a BitTorrent-like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1.012177
0.015541
0.013681
0.012699
0.009663
0.006562
0.001565
0.000239
0.000007
0
0
0
0
0
Data Reduction Model for Balancing Indexing and Securing Resources in the Internet-of-Things Applications Evolution of the Internet of Things (IoT) is revolutionizing the connecting, monitoring, controlling, and managing of things, objects, and almost all surroundings through the Internet. To reveal the potential of IoT, rich knowledge has to be extracted, indexed, and shared securely in real time. Recent comprehensive research on IoT spotlights the main correlative challenges, such as security, scalability, heterogeneity, and big data. Due to the heterogeneity of IoT applications that produce a large volume of a variety of data streams in real time, mining, securing, and analyzing IoT data become tedious and challenging tasks. Indexing sensory data is a data-mining technique that eases information retrieval. But ordinary indexing methods do not fit such massive and dynamic data, where indexes become out-of-date once they are built. Clustering, data reduction, and summarization present promising solutions for enabling low-power security and balanced indexing. This article presents a novel method for dynamic data reduction and summarization using dynamic time warping (DTW), together with a balanced architecture for enabling balanced indexing based on similarity data fusion. Data reduction-based prediction models enable real-time search and secure discovery for Smart Things (SThs). The results of the proposed model were validated using real examples and data sets. Using the Szeged-weather data set, similar STh data is reduced by 95%. Thus, index sizes could be reduced, and using smart scheduling, the crawling cycle length could be expanded.
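Since DTW is the similarity measure driving the reduction, the textbook O(nm) DTW recurrence is sketched below; this is the standard algorithm, not the paper's pipeline, and the similarity threshold is hypothetical.

```python
# Classic dynamic time warping between two numeric streams, e.g. two
# sensors' temperature readings. Smaller distance = more similar, so
# one stream could stand in for the other during reduction.

def dtw_distance(s, t):
    n, m = len(s), len(t)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Best of match, insertion, deletion along the warping path.
            d[i][j] = cost + min(d[i - 1][j - 1], d[i - 1][j], d[i][j - 1])
    return d[n][m]

a = [20.1, 20.3, 20.9, 21.5, 21.4]   # two hypothetical sensor streams
b = [20.0, 20.4, 21.0, 21.5]
similar = dtw_distance(a, b) < 1.0   # hypothetical similarity threshold
print(dtw_distance(a, b), similar)
```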
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Event-Triggered Synchronization of Multiple Discrete-Time Markovian Jump Memristor-Based Neural Networks With Mixed Mode-Dependent Delays This paper deals with the global synchronization problem of multiple discrete-time Markovian jump memristor-based neural networks (DTMJMNNs) with mixed mode-dependent delays via a novel event-triggered impulsive coupling control (ETICC). The parameters of the multiple DTMJMNNs and the mixed time delays (both discrete and distributed delays) switch randomly according to a Markov chain. In the ETICC strategy, the controller does not work all the time, but only works at impulse instants determined by specific events. In particular, the coupling matrix can be non-Laplacian. By using the Lyapunov stability theory, linear matrix inequalities (LMIs), and the Kronecker product, some sufficient conditions for global synchronization of multiple DTMJMNNs under the event-triggered strategy are derived. Two examples are presented to test the validity of the theoretical analysis results.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
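Chord's one operation, mapping a key onto a node, reduces to a successor lookup on a sorted identifier ring. A minimal local sketch, assuming SHA-1 identifiers truncated to M bits; real Chord resolves this in O(log N) hops through finger tables rather than a global sorted list.

```python
import bisect
import hashlib

M = 16  # identifier bits

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

class Ring:
    def __init__(self, nodes):
        self.ids = sorted(chord_id(n) for n in nodes)
        self.node = {chord_id(n): n for n in nodes}

    def successor(self, key: str) -> str:
        """A key is stored at the first node whose id >= id(key), wrapping."""
        i = bisect.bisect_left(self.ids, chord_id(key)) % len(self.ids)
        return self.node[self.ids[i]]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.successor("some-key"))
```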
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
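As a concrete instance, here is a minimal numpy sketch of ADMM applied to the lasso (one of the problems listed above): minimize 0.5·||Ax − b||² + λ·||x||₁ via the standard quadratic x-update, soft-threshold z-update, and dual update. Parameter choices (ρ, the iteration count) are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # reused every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))         # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)  # proximal step for the l1 term
        u = u + x - z                         # scaled dual (residual) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))  # sparse estimate near x_true
```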
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Scalable Fault-Tolerant Aggregation in Large Process Groups This paper discusses fault-tolerant, scalable solutions to the problem of accurately and scalably calculating global aggregate functions in large process groups communicating over unreliable networks. These groups could represent sensors or processes communicating over a network that is either fixed (e.g., the Internet) or dynamic (e.g., multihop ad-hoc). Group members are prone to failures. The ability to evaluate global aggregate properties (e.g., the average of sensor temperature readings) is important for higher-level coordination activities in such large groups. We first define the setting and problem, laying down metrics to evaluate different algorithms for the same. We discuss why the usual approaches to solve this problem are unviable and unscalable over an unreliable network prone to message delivery failures and crash failures. We then propose a technique to impose an abstract hierarchy on such large groups, describing how this hierarchy can be made to mirror the network topology. We discuss several alternatives to use this technique to solve the global aggregate function evaluation problem. Finally, we present a protocol based on gossiping that uses this hierarchical technique. We present mathematical analysis and performance results to validate the robustness, efficiency and accuracy of the Hierarchical Gossiping algorithm.
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Synopsis diffusion for robust aggregation in sensor networks Aggregating sensor readings within the network is an essential technique for conserving energy in sensor networks. Previous work proposes aggregating along a tree overlay topology in order to conserve energy. However, a tree overlay is very fragile, and the high rate of node and link failures in sensor networks often results in a large fraction of readings being unaccounted for in the aggregate. Value splitting on multi-path overlays, as proposed in TAG, reduces the variance in the error, but still results in significant errors. Previous approaches are fragile, fundamentally, because they tightly couple aggregate computation and message routing. In this paper, we propose a family of aggregation techniques, called synopsis diffusion, that decouples the two, enabling aggregation algorithms and message routing to be optimized independently. As a result, the level of redundancy in message routing (as a trade-off with energy consumption) can be adapted to both expected and encountered network conditions. We present a number of concrete examples of synopsis diffusion algorithms, including a broadcast-based instantiation of synopsis diffusion that is as energy efficient as a tree, but dramatically more robust.
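The decoupling works because the synopses are order- and duplicate-insensitive: merging is a bitwise OR, so a reading that reaches the sink over many redundant paths is still counted once. A minimal sketch of a Flajolet-Martin count synopsis of the kind used for such aggregates (the 0.77351 correction factor is the standard FM constant; single-sketch estimates are coarse, and real systems average several):

```python
import hashlib

BITS = 32

def fm_position(item: str) -> int:
    """Geometric hash: position j is chosen with probability 2^-(j+1)."""
    h = int(hashlib.sha1(item.encode()).hexdigest(), 16)
    pos = 0
    while pos < BITS - 1 and not (h >> pos) & 1:
        pos += 1
    return pos

def synopsis(items):
    s = 0
    for it in items:
        s |= 1 << fm_position(it)
    return s

def merge(a, b):
    return a | b  # order- and duplicate-insensitive

def estimate(s):
    j = 0
    while (s >> j) & 1:
        j += 1
    return (2 ** j) / 0.77351

readings = [f"sensor-{i}" for i in range(1000)]
s1, s2 = synopsis(readings[:700]), synopsis(readings[300:])  # overlapping!
print(round(estimate(merge(s1, s2))))  # rough count; the overlap adds nothing
```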
Robust Aggregation in Sensor Networks In the emerging area of sensor-based systems, a significant challenge is to develop scalable, fault-tolerant methods to extract useful information from the data the sensors collect. An approach to this data management problem is the use of sensor "database" systems, which allow users to perform aggregation queries on the readings of a sensor network. Due to power and range constraints, centralized approaches are generally impractical, so most systems use in-network aggregation to reduce network traffic. However, these aggregation strategies become bandwidth-intensive when combined with the fault-tolerant, multi-path routing methods often used in these environments. In order to avoid this expense, we investigate the use of approximate in-network aggregation using small sketches and we survey robust and scalable methods for computing duplicate-sensitive aggregates.
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
Geographic Gossip: Efficient Averaging for Sensor Networks Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O(n^1.5 √(log n) log ε^(-1)) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental results.
Randomized gossip algorithms Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of "gossip" algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance and scaling of gossip algorithms on two popular networks: Wireless Sensor Networks, which are modeled as Geometric Random Graphs, and the Internet graph under the so-called Preferential Connectivity (PC) model.
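A minimal simulation of the pairwise primitive analyzed above, assuming an arbitrary connected graph given as an adjacency list: at each step a uniformly random node averages its value with a uniformly random neighbor, and all values converge to the global mean.

```python
import random

def gossip_average(neighbors, values, steps=20000, seed=1):
    random.seed(seed)
    x = dict(values)
    nodes = list(neighbors)
    for _ in range(steps):
        i = random.choice(nodes)
        j = random.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0  # both endpoints take the average
    return x

# Ring of 8 nodes holding 0..7; the true average is 3.5.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
vals = {i: float(i) for i in range(8)}
print({k: round(v, 3) for k, v in gossip_average(ring, vals).items()})
```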
Extremal cover times for random walks on trees
An Inexact Dual Fast Gradient-Projection Method for Separable Convex Optimization with Linear Coupled Constraints. In this paper, a class of separable convex optimization problems with linear coupled constraints is studied. According to the Lagrangian duality, the linear coupled constraints are appended to the objective function. Then, a fast gradient-projection method is introduced to update the Lagrangian multiplier, and an inexact solution method is proposed to solve the inner problems. The advantage of our proposed method is that the inner problems can be solved in an inexact and parallel manner. The established convergence results show that our proposed algorithm still achieves optimal convergence rate even though the inner problems are solved inexactly. Finally, several numerical experiments are presented to illustrate the efficiency and effectiveness of our proposed algorithm.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
A filtering technique to lower LC oscillator phase noise Based on a physical understanding of phase-noise mechanisms, a passive LC filter is found to lower the phase-noise factor in a differential oscillator to its fundamental minimum. Three fully integrated LC voltage-controlled oscillators (VCOs) serve as a proof of concept. Two 1.1-GHz VCOs achieve -153 dBc/Hz at 3 MHz offset, biased at 3.7 mA from 2.5 V. A 2.1-GHz VCO achieves -148 dBc/Hz at 15 MHz offset, taking 4 mA from a 2.7-V supply. All oscillators use fully integrated resonators, and the first two exceed discrete transistor modules in figure of merit. Practical aspects and repercussions of the technique are discussed
A 250 mV 7.5 μW 61 dB SNDR SC ΔΣ Modulator Using Near-Threshold-Voltage-Biased Inverter Amplifiers in 130 nm CMOS An ultra-low voltage switched-capacitor (SC) ΔΣ converter running at a record low supply voltage of only 250 mV is introduced. System level aspects are discussed and special circuit techniques described, that enable robust operation at such a low supply voltage. Using a SC biasing approach, inverter-based integrators are realized with overdrives close to the transistor threshold voltage Vth while compensating for process, voltage and temperature (PVT) variation. Biasing voltages are generated on-chip using a novel level shifting circuit, that overcomes headroom limitations due to saturation voltage Vsat. With an oversampling ratio (OSR) of 70 and a sampling frequency (fS) of 1.4 MHz at 250 mV power supply the converter achieves 61 dB SNDR in 10 kHz bandwidth while consuming a total power of 7.5 μW.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.041196
0.035194
0.034599
0.03144
0.026949
0.016427
0.008016
0.000246
0.000008
0
0
0
0
0
A 12 mV Input, 90.8% Peak Efficiency CRM Boost Converter With a Sub-Threshold Startup Voltage for TEG Energy Harvesting. This paper proposed a high efficiency boost converter targeting thermoelectric generator energy harvesting. The proposed converter adopts the critical conduction mode rather than the discontinuous mode to reduce the conduction loss, which can improve the peak efficiency at high input power. To reduce the minimum input voltage, an adaptive on-resistance switch, which can automatically change the hi...
A modified Karnaugh map technique A new self-documenting method of constructing Karnaugh maps that assigns a unique identifier to each element in a Boolean minterm expression and uses these identifiers to construct the map is discussed. This method provides an immediate "audit trail" for the map's creation and facilitates the teaching of Karnaugh maps by including enough information within the map to show the exact method used to construct the map. During the grading process, this information enables the teacher to better assess a student's level of understanding of the Karnaugh map technique by highlighting exactly where errors were made. It also enhances a student's understanding of Karnaugh map construction during a lecture.
A Hybrid Threshold Self-Compensation Rectifier For RF Energy Harvesting This paper presents a novel highly efficient 5-stage RF rectifier in an SMIC 65 nm standard CMOS process. To improve power conversion efficiency (PCE) and reduce the minimum input voltage, a hybrid threshold self-compensation approach is applied in this proposed RF rectifier, which combines gate-bias threshold compensation with body-effect compensation. The proposed circuit uses PMOSFETs in all the stages except for the first stage to allow individual body-bias, which eliminates the need for triple-well technology. The presented RF rectifier exhibits a simulated maximum PCE of 30% at -16.7 dBm (20.25 μW) and produces 1.74 V across a 0.5 MΩ load resistance. With a 1 MΩ load resistance, it outputs a 1.5 V DC voltage from a remarkably low input power level of -20.4 dBm (9 μW) with a PCE of about 25%.
A P&O MPPT With a Novel Analog Power-Detector for WSNs Applications This brief presents a perturb and observe (P&O) maximum power point tracking (MPPT) with a novel analog power detector for wireless sensor nodes (WSNs) applications. The proposed analog power detector can judge the output power variation only by the voltage measurements, which eliminates the use of current measurements or a microcontroller unit (MCU), so that its complexity and power consumption are greatly reduced. The proposed P&O MPPT circuit with the analog power detector has the advantages of simple structure, low power consumption, and high power efficiency. The proposed MPPT is used in a discontinuous conduction mode (DCM) buck-boost converter with fixed duty cycle to improve the efficiency performance. This design has been implemented in a 0.18 μm CMOS process occupying an active area of 0.98 × 0.9 mm². The input voltage range can be from 2 V to 7.2 V. The peak conversion efficiency and peak tracking efficiency are 86% and 98% respectively, with power consumption of about 9 μW.
Interference Robust Detector-First Near-Zero Power Wake-Up Receiver This paper presents the development of a wake-up receiver (WuRX) at nanowatt power levels for event-driven applications. This work improves the state of the art, obtaining higher sensitivity than previous work in the 151.8- and 433-MHz bands, low-power operation, and robustness to interference due to an integrated offset compensation algorithm operating without any external calibration. Simultaneous low-power operation and high sensitivity are achieved through a passive detector design based upon an optimization dictated by the terminal impedances of the detector. The receiver is implemented in a 130-nm CMOS process and obtains −76 dBm sensitivity in the 151.8-MHz multi-use radio service (MURS) band and −71 dBm in the 433-MHz Industrial, Scientific and Medical (ISM) band with a total dc power draw of just 7.6 nW from 1.0- and 0.6-V supplies.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
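A small sketch of the algebraic form described above, built directly over the 2^n state space rather than through explicit semi-tensor products: each state becomes a canonical vector, the network becomes a 0/1 transition matrix L, the number of fixed points equals trace(L), and trace(L^k) counts states that return to themselves after k steps. The 3-node update rules are illustrative.

```python
import numpy as np

def step(state):                  # state = (A, B, C) with values 0/1
    a, b, c = state
    return (b & c, 1 - a, a | c)  # illustrative Boolean update rules

n = 3
N = 2 ** n
L = np.zeros((N, N), dtype=int)
for cur in range(N):
    bits = tuple((cur >> (n - 1 - i)) & 1 for i in range(n))
    nxt = sum(v << (n - 1 - i) for i, v in enumerate(step(bits)))
    L[nxt, cur] = 1               # column `cur` maps to row `nxt`

print("fixed points:", np.trace(L))  # states with L x = x
print("states returning after k steps:",
      [int(np.trace(np.linalg.matrix_power(L, k))) for k in range(1, 5)])
```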
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns This article describes three powerful general-purpose families of exploits for buffer overruns: arc injection, pointer subterfuge, and heap smashing. These new techniques go beyond the traditional "stack smashing" attack and invalidate traditional assumptions about buffer overruns.
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI (linear matrix inequality)-based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers. LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant with the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
0
Thinking Like a Vertex: A Survey of Vertex-Centric Frameworks for Large-Scale Distributed Graph Processing The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of commodity machines are not well supported by popular Big Data tools like MapReduce, which are notoriously poor performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to “think like a vertex” (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.
k2-Trees for Compact Web Graph Representation This paper presents a Web graph representation based on a compact tree structure that takes advantage of large empty areas of the adjacency matrix of the graph. Our results show that our method is competitive with the best alternatives in the literature, offering a very good compression ratio (3.3–5.3 bits per link) while permitting fast navigation on the graph to obtain direct as well as reverse neighbors (2–15 microseconds per neighbor delivered). Moreover, it allows for extended functionality not usually considered in compressed graph representations.
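A minimal construction sketch for k = 2, assuming the adjacency matrix side is a power of two: each level emits one bit per quadrant, and recursion only enters nonempty quadrants. For brevity the bits come out in depth-first order; the real structure stores them level by level so rank queries can navigate it.

```python
def k2_build(matrix):
    """Return (T, L): internal-level bits and leaf bits of a k^2-tree, k=2."""
    T, leaves = [], []

    def any_bit(r, c, size):
        return any(matrix[i][j]
                   for i in range(r, r + size)
                   for j in range(c, c + size))

    def build(r, c, size):
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                bit = 1 if any_bit(r + dr, c + dc, half) else 0
                (leaves if half == 1 else T).append(bit)
                if bit and half > 1:      # only nonempty quadrants recurse
                    build(r + dr, c + dc, half)

    build(0, 0, len(matrix))
    return T, leaves

adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [1, 0, 0, 0]]
T, L = k2_build(adj)
print("T:", T, "L:", L)  # two short bitmaps instead of the full matrix
```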
Accelerating sparse matrix-vector multiplication on GPUs using bit-representation-optimized schemes The sparse matrix-vector (SpMV) multiplication routine is an important building block used in many iterative algorithms for solving scientific and engineering problems. One of the main challenges of SpMV is its memory-boundedness. Although compression has been proposed previously to improve SpMV performance on CPUs, its use has not been demonstrated on the GPU because of the serial nature of many compression and decompression schemes. In this paper, we introduce a family of bit-representation-optimized (BRO) compression schemes for representing sparse matrices on GPUs. The proposed schemes, BRO-ELL, BRO-COO, and BRO-HYB, perform compression on index data and help to speed up SpMV on GPUs through reduction of memory traffic. Furthermore, we formulate a BRO-aware matrix reordering scheme as a data clustering problem and use it to increase compression ratios. With the proposed schemes, experiments show that average speedups of 1.5× compared to ELLPACK and HYB can be achieved for SpMV on GPUs.
Compression-aware graph computation. Much recent work has focused on parallel graph processing, including PowerGraph [9] and Ligra [14]. These frameworks process large graphs in shared memory, requiring a terabyte of memory and incurring expensive maintenance costs. Reducing graph size to fit in memory is thus crucial to cutting the cost of large-scale graph computation. Compression has been widely used to reduce graph size. However, it can compromise graph computation efficiency because of the nontrivial decompression overhead incurred before computation. In this paper, we propose a simple and yet efficient coding scheme. It not only leads to smaller compressed graphs; it also lets us perform graph computation directly on the compressed graphs with no or only partial decompression, namely compression-aware computation, leading to faster running time. Our experiments validate that the coding scheme achieves a 2.99X compression ratio, and three compression-aware graph algorithms achieve 7.02X, 2.88X and 2.34X faster running times than the same algorithms on the uncompressed graphs.
Implementing Push-Pull Efficiently in GraphBLAS We factor Beamer's push-pull, also known as direction-optimized breadth-first search (DOBFS), into 3 separable optimizations, and analyze them for generalizability, asymptotic speedup, and contribution to overall speedup. We demonstrate that masking is critical for high performance and can be generalized to all graph algorithms where the sparsity pattern of the output is known a priori. We show that these graph algorithm optimizations, which together constitute DOBFS, can be neatly and separably described using linear algebra and can be expressed in the GraphBLAS linear-algebra-based framework. We provide experimental evidence that with these optimizations, a DOBFS expressed in a linear-algebra-based graph framework attains competitive performance with state-of-the-art graph frameworks on the GPU and on a multi-threaded CPU, achieving 101 GTEPS on a Scale 22 RMAT graph.
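A sequential sketch of the factored optimization: push from the frontier while it is small, pull from unvisited vertices once it grows; the `v not in depth` test plays exactly the role of the output mask argued to be critical above. The switch threshold `alpha` is illustrative, and the graph is assumed undirected.

```python
def dobfs(neighbors, source, alpha=0.05):
    n = len(neighbors)
    depth = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        level += 1
        if len(frontier) < alpha * n:   # push: expand frontier edges
            nxt = {v for u in frontier for v in neighbors[u]
                   if v not in depth}
        else:                           # pull: each unvisited vertex checks
            nxt = {v for v in neighbors if v not in depth
                   and any(u in frontier for u in neighbors[v])}
        for v in nxt:
            depth[v] = level
        frontier = nxt
    return depth

g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(dobfs(g, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```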
A fast GPU algorithm for graph connectivity Graphics processing units provide large computational power at a very low price, which positions them as a ubiquitous accelerator. General purpose programming on the graphics processing units (GPGPU) is best suited for regular data parallel algorithms. They are not directly amenable to algorithms which have irregular data access patterns such as list ranking and finding the connected components of a graph. In this work, we present a GPU-optimized implementation for finding the connected components of a given graph. Our implementation tries to minimize the impact of irregularity, both at the data level and the functional level. Our implementation achieves a speedup of 9 to 12 times over the best sequential CPU implementation. For instance, our implementation finds connected components of a graph of 10 million nodes and 60 million edges in about 500 milliseconds on a GPU, given a random edge list. We also draw interesting observations on why PRAM algorithms, such as the Shiloach-Vishkin algorithm, may not be a good fit for the GPU and how they should be modified.
Two Fast Algorithms for Sparse Matrices: Multiplication and Permuted Transposition Let A and B be two sparse matrices whose orders are p by q and q by r. Their product C = AB requires N nontrivial multiplications where 0 ≤ N ≤ pqr. The operation count of our algorithm is usually proportional to N; however, its worst case is O(p, r, N_A, N) where N_A is the number of elements in A. This algorithm can be used to assemble the sparse matrix arising from a finite element problem from the basic elements, using Σ_{g=1}^{m} [order(g)]² operations, where m is the total number of basic elements and order(g) is the order of the gth element matrix. The concept of an unordered merge plays a key role in obtaining our fast multiplication algorithm. It forces us to accept an unordered sparse row-wise format as output for the product C. The permuted transposition algorithm computes (RA)^T in O(p, q, N_A) operations, where R is a permutation matrix. It also orders an unordered sparse row-wise representation. We can combine these algorithms to produce an O(M) algorithm to solve Ax = b, where M is the number of multiplications needed to factor A into LU.
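The multiplication algorithm's core is a row-wise merge into an unordered accumulator, which is exactly why the product comes back in unordered sparse row-wise form. A minimal sketch over dict-of-dicts sparse rows:

```python
def spgemm(A, B):
    """C = A @ B with A, B sparse row-wise: {row: {col: value}}."""
    C = {}
    for i, arow in A.items():
        acc = {}                       # unordered accumulator for row i
        for k, a_ik in arow.items():
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a_ik * b_kj   # one nontrivial mult
        if acc:
            C[i] = acc
    return C

A = {0: {0: 2.0, 2: 1.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: -1.0}}
print(spgemm(A, B))  # {0: {1: 7.0}, 1: {0: 15.0}}
```

The work in the inner loops is one accumulator update per nontrivial multiplication, matching the O(N)-proportional operation count claimed above.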
Scheduling Techniques for GPU Architectures with Processing-In-Memory Capabilities. Processing data in or near memory (PIM), as opposed to in conventional computational units in a processor, can greatly alleviate the performance and energy penalties of data transfers from/to main memory. Graphics Processing Unit (GPU) architectures and applications, where main memory bandwidth is a critical bottleneck, can benefit from the use of PIM. To this end, an application should be properly partitioned and scheduled to execute on either the main, powerful GPU cores that are far away from memory or the auxiliary, simple GPU cores that are close to memory (e.g., in the logic layer of 3D-stacked DRAM). This paper investigates two key code scheduling issues in such a GPU architecture that has PIM capabilities, to maximize performance and energy-efficiency: (1) how to automatically identify the code segments, or kernels, to be offloaded to the cores in memory, and (2) how to concurrently schedule multiple kernels on the main GPU cores and the auxiliary GPU cores in memory. We develop two new runtime techniques: (1) a regression-based affinity prediction model and mechanism that accurately identifies which kernels would benefit from PIM and offloads them to GPU cores in memory, and (2) a concurrent kernel management mechanism that uses the affinity prediction model, a new kernel execution time prediction model, and kernel dependency information to decide which kernels to schedule concurrently on main GPU cores and the GPU cores in memory. Our experimental evaluations across 25 GPU applications demonstrate that these two techniques can significantly improve both application performance (by 25% and 42%, respectively, on average) and energy efficiency (by 28% and 27%).
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known whether they could be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called the heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
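One coarsening pass with the heavy-edge heuristic is easy to sketch: visit vertices in random order and match each unmatched vertex to its unmatched neighbor across the heaviest edge, then collapse matched pairs into coarse vertices (the graph and names are illustrative).

```python
import random

def heavy_edge_matching(adj, seed=0):
    """adj: {u: {v: weight}} undirected. Returns vertex -> coarse id."""
    random.seed(seed)
    order = list(adj)
    random.shuffle(order)
    match = {}
    for u in order:
        if u in match:
            continue
        free = [(w, v) for v, w in adj[u].items() if v not in match]
        if free:
            _, v = max(free)             # heaviest incident free edge
            match[u], match[v] = v, u
        else:
            match[u] = u                 # stays unmatched this pass
    cmap, next_id = {}, 0
    for u in order:
        if u not in cmap:
            cmap[u] = cmap[match[u]] = next_id
            next_id += 1
    return cmap

adj = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 2}, 2: {0: 1, 3: 4}, 3: {1: 2, 2: 4}}
print(heavy_edge_matching(adj))  # pairs {0,1} and {2,3} collapse together
```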
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter. The tool set's instruction interpreters support several popular instruction sets, including Alpha, PowerPC, x86, and ARM.
The emergence of a networking primitive in wireless sensor networks The wireless sensor network community approached networking abstractions as an open question, allowing answers to emerge with time and experience. The Trickle algorithm has become a basic mechanism used in numerous protocols and systems. Trickle brings nodes to eventual consistency quickly and efficiently while remaining remarkably robust to variations in network density, topology, and dynamics. Instead of flooding a network with packets, Trickle uses a "polite gossip" policy to control send rates so each node hears just enough packets to stay consistent. This simple mechanism enables Trickle to scale to 1000-fold changes in network density, reach consistency in seconds, and require only a few bytes of state yet impose a maintenance cost of a few sends an hour. Originally designed for disseminating new code, experience has shown Trickle to have much broader applicability, including route maintenance and neighbor discovery. This paper provides an overview of the research challenges wireless sensor networks face, describes the Trickle algorithm, and outlines several ways it is used today.
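The Trickle timer itself amounts to a few rules: pick a send time in the second half of the current interval, suppress the send if at least k consistent messages were overheard, double the interval (up to I_max) while the network stays consistent, and reset to I_min on inconsistency. A compact single-node simulation with illustrative constants:

```python
import random

class Trickle:
    def __init__(self, i_min=1.0, i_max=64.0, k=1, seed=0):
        self.rng = random.Random(seed)
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.reset()

    def reset(self):                     # on inconsistency: back to I_min
        self.interval = self.i_min
        self._new_interval()

    def _new_interval(self):
        self.c = 0                       # consistent messages heard so far
        self.t = self.rng.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        self.c += 1

    def end_of_interval(self):
        """True if the node transmitted (at time t) during this interval."""
        sent = self.c < self.k           # polite gossip: suppress if heard
        self.interval = min(2 * self.interval, self.i_max)
        self._new_interval()
        return sent

node = Trickle()
for rnd in range(6):
    if rnd == 3:
        node.hear_consistent()           # a neighbor already said it
    print(rnd, "interval", node.interval, "sent:", node.end_of_interval())
```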
A 14 bit 200 MS/s DAC With SFDR > 78 dBc, IM3 < -83 dBc and NSD < -163 dBm/Hz Across the Whole Nyquist Band Enabled by Dynamic-Mismatch Mapping. This paper presents a 14 bit 200 MS/s current-steering DAC with a novel digital calibration technique called dynamic-mismatch mapping (DMM). By optimizing the switching sequence of current cells to reduce the dynamic integral nonlinearity in an I-Q domain, the DMM technique digitally calibrates all mismatch errors so that both the DAC static and dynamic performance can be significantly improved in...
A 10/30 MHz Fast Reference-Tracking Buck Converter With DDA-Based Type-III Compensator A 10/30 MHz voltage-mode controlled buck converter with a wide duty-cycle range is presented. A high-accuracy delay-compensated ramp generator using only low-speed comparators but can work up to 70 MHz is proposed. By using a differential difference amplifier (DDA), a new Type-III compensator is proposed to reduce the chip area of the compensator by 60%. Moreover, based on the unique structure of the proposed compensator, an end-point prediction (EPP) scheme is also implemented to achieve fast reference-tracking responses. The converter was fabricated in a 0.13 μm standard CMOS process. It achieves wide duty-cycle ranges of 0.75 and 0.59 when switching at 10 MHz and 30 MHz with peak efficiencies of 91.8% and 86.6%, respectively. The measured maximum output power is 3.6 W with 2.4 V output voltage and 1.5 A load current. With a constant load current of 500 mA, the up-tracking speeds for switching frequencies of 10 MHz and 30 MHz are 1.67 μs/V and 0.67 μs/V, respectively. The down-tracking speeds for 10 MHz and 30 MHz are 4.44 μs/V and 1.56 μs/V, respectively.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.104222
0.1
0.1
0.1
0.1
0.05
0.001667
0.000109
0.000006
0
0
0
0
0
A 1000 fps Vision Chip Based on a Dynamically Reconfigurable Hybrid Architecture Comprising a PE Array Processor and Self-Organizing Map Neural Network This paper proposes a vision chip hybrid architecture with a dynamically reconfigurable processing element (PE) array processor and a self-organizing map (SOM) neural network. It integrates a high speed CMOS image sensor, three von Neumann-type processors, and a non-von Neumann-type bio-inspired SOM neural network. The processors consist of a pixel-parallel PE array processor with O(N×N) parallelism, a row-parallel row-processor (RP) array processor with O(N) parallelism and a thread-parallel dual-core microprocessor unit (MPU) with O(2) parallelism. They execute low-, mid- and high-level image processing, respectively. The SOM network speeds up high-level processing in pattern recognition tasks by O(N/4×N/4), which improves the chip performance remarkably. The SOM network can be dynamically reconfigured from the PE array to largely save chip area. A prototype chip with a 256 × 256 image sensor, a reconfigurable 64 × 64 PE array processor/16 × 16 SOM network, a 64 × 1 RP array processor and a dual-core 32-bit MPU was implemented in a 0.18 μm CMOS image sensor process. The chip can perform image capture and image processing at various levels at high speed and in a flexible fashion. Various complicated applications including M-S functional solution, horizon estimation, hand gesture recognition and face recognition are demonstrated at high speed, from several hundred to >1000 fps.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present the parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2GHz general purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture we demonstrate the energy benefit resulted from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, it is the first time that the approximate computing and parallel processing are applied to FPGA based spiking neural networks. The influence of the parallel processing on the benefits of approximate computing is also discussed in detail.
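The approximate-arithmetic idea above can be made concrete with a small, hedged sketch: an illustrative fixed-point multiplier that zeroes its lowest output bits, used inside a synaptic accumulation. The function names, bit widths, and inputs are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of approximate arithmetic in a synaptic accumulation:
# the multiplier zeroes its drop_bits least-significant output bits,
# trading a small numeric error for (in hardware) reduced energy.

def approx_mul(a, b, drop_bits=4):
    """Integer multiply whose drop_bits low output bits are forced to zero."""
    mask = ~((1 << drop_bits) - 1)
    return (a * b) & mask

def accumulate(weights, spikes, exact=False):
    """Weighted sum of binary spike inputs, exact or approximate."""
    mul = (lambda a, b: a * b) if exact else approx_mul
    return sum(mul(w, s) for w, s in zip(weights, spikes))

w = [13, 250, 7, 99]   # fixed-point synaptic weights
s = [1, 0, 1, 1]       # input spikes
print(accumulate(w, s, exact=True), accumulate(w, s))  # 119 vs 96
```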
A Low-Cost High-Speed Neuromorphic Hardware Based on Spiking Neural Network Neuromorphic computing is a relatively new interdisciplinary research topic that draws on several fields of science and technology, such as electronics, computer science, and biology. Neuromorphic systems are software/hardware systems used to implement neural networks based on the functionality of the human brain. The goal of neuromorphic systems is to mimic the biologically inspired concepts of th...
Efficient Design of Spiking Neural Network With STDP Learning Based on Fast CORDIC In emerging Spiking Neural Network (SNN) based neuromorphic hardware design, energy efficiency and on-line learning are attractive advantages, mainly contributed by bio-inspired local learning with nonlinear dynamics and at the cost of associated hardware complexity. This paper presents a novel SNN design employing the fast COordinate Rotation DIgital Computer (CORDIC) algorithm to achieve fast spike t...
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using the deep compression technique in a spiking neural network chip, the amount of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves a throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and an energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
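As a rough, hedged sketch of the pruning half of deep compression (quantization and weight sharing are not modeled; the threshold, matrix size, and keep ratio are illustrative assumptions echoing the 1/16 figure above):

```python
import random

# Sketch: magnitude pruning keeps only the largest-magnitude 1/16 of the
# weights, stored as (row, col, value) triples instead of a dense array.

random.seed(1)
W = [[random.uniform(-1, 1) for _ in range(64)] for _ in range(16)]

def prune(W, keep_ratio=1 / 16):
    flat = sorted((abs(w) for row in W for w in row), reverse=True)
    thresh = flat[int(len(flat) * keep_ratio) - 1]   # magnitude cutoff
    return [(i, j, w) for i, row in enumerate(W)
            for j, w in enumerate(row) if abs(w) >= thresh]

sparse = prune(W)
print(len(sparse), "of", 16 * 64, "weights kept")   # about 64 of 1024
```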
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
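To make the fast non-dominated sorting step concrete, here is a minimal Python sketch under assumed conventions (two-objective minimization, solutions given as tuples of objective values; the function names are illustrative, not from the paper):

```python
# Sketch of NSGA-II's fast non-dominated sorting for a minimization problem.

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(pop):
    """Return a list of fronts (lists of indices into pop); O(M*N^2)."""
    n = len(pop)
    S = [[] for _ in range(n)]      # solutions dominated by i
    n_dom = [0] * n                 # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(pop[i], pop[j]):
                S[i].append(j)
            elif dominates(pop[j], pop[i]):
                n_dom[i] += 1
        if n_dom[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Example: two objectives, five candidate solutions.
print(fast_non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4), (5, 5)]))
# -> [[0, 1, 2], [3, 4]]
```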
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolution of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. System designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links, and mobility aspects. This paper first presents the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Effectiveness of HT-assisted sinkhole and blackhole denial of service attacks targeting mesh networks-on-chip. There are ample opportunities at both the design and manufacturing phases to meddle in a many-core chip system, especially its underlying communication fabric, known as the network-on-chip (NoC), through the inclusion of malicious hardware Trojans (HT). In this paper, we focus on studying two specific HT-assisted Denial-of-Service (DoS) attacks, namely the sinkhole and blackhole attacks, that directly target the NoC of a many-core chip. In a blackhole attack, intermediate routers with inserted HTs stop forwarding data packets/flits towards the packets' destination; instead, packets are either dropped from the network or diverted to some other malicious nodes. Sinkhole attacks, which exhibit similar attack effects to blackhole attacks, can occur when the NoC supports adaptive routing. In this case, a malicious node actively solicits packets from its neighbor nodes by pretending to have sufficient free buffer slots. The effects and efficiencies of both sinkhole and blackhole DoS attacks are modeled and quantified in this paper, and a few factors that influence attack effects are found to be critical. Through fine-tuning of these parameters, both attacks are shown to cause more damage to the NoC, measured as over a 30% increase in packet loss rate. Even with current detection and defense methods in place, the packet loss rate is still remarkably high, suggesting the need for new and more effective detection and defense methods against the enhanced blackhole and sinkhole attacks described in the paper.
Energy Efficient Run-Time Incremental Mapping for 3-D Networks-on-Chip 3-D Networks-on-Chip (NoC) emerge as a potent solution to address both the interconnection and design complexity problems facing future Multiprocessor System-on-Chips (MPSoCs). Effective run-time mapping on such 3-D NoC-based MPSoCs can be quite challenging, as the arrival order and task graphs of the target applications are typically not known a priori, which can be further complicated by stringent energy requirements for NoC systems. This paper thus presents an energy-aware run-time incremental mapping algorithm (ERIM) for 3-D NoC which can minimize the energy consumption due to the data communications among processor cores, while reducing the fragmentation effect on the incoming applications to be mapped, and simultaneously satisfying the thermal constraints imposed on each incoming application. Specifically, incoming applications are mapped to cuboid tile regions for lower energy consumption of communication and the minimal routing. Fragment tiles due to system fragmentation can be gleaned for better resource utilization. Extensive experiments have been conducted to evaluate the performance of the proposed algorithm ERIM, and the results are compared against the optimal mapping algorithm (branch-and-bound) and two heuristic algorithms (TB and TL). The experiments show that ERIM outperforms the TB and TL methods with significant energy saving (more than 10%), much reduced average response time, and improved system utilization.
A Security Framework for NoC Using Authenticated Encryption and Session Keys Network on Chip (NoC) is an emerging solution to the existing scalability problems with System on Chip (SoC). However, it is exposed to security threats like extraction of secret information from IP cores. In this paper we present an Authenticated Encryption (AE)-based security framework for NoC based systems. The security framework resides in the Network Interface (NI) of every IP core, allowing secure communication among such IP cores. The secure cores can communicate using permanent keys, whereas temporary session keys are used for communication between secure and non-secure cores. A traffic limiting counter is used to prevent bandwidth denial, and an access rights table avoids unauthorized memory accesses. We simulated and implemented our framework using Verilog/VHDL modules on top of the NoCem emulator. The results showed tolerable area overhead and did not affect the network performance apart from some initial latency.
Secure Model Checkers for Network-on-Chip (NoC) Architectures. As chip multiprocessors (CMPs) are becoming more susceptible to process variation, crosstalk, and hard and soft errors, emerging threats from rogue employees in a compromised foundry are creating new vulnerabilities that could undermine the integrity of our chips with malicious alterations. As the Network-on-Chip (NoC) is a focal point of sensitive data transfer and critical device coordination, there is an urgent demand for secure and reliable communication. In this paper we propose Secure Model Checkers (SMCs), a real-time solution for control logic verification and functional correctness in the micro-architecture to detect Hardware Trojan (HT) induced denial-of-service attacks and improve reliability. In our evaluation, we show that SMCs provide significant security enhancements in real time with only 1.5% power and 1.1% area overhead penalty in the micro-architecture.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool, while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was the AT91M40400. The results clearly establish scratchpad memory as a low-power alternative in most situations, with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% of the cache memory.
On-Chip Interconnection Architecture of the Tile Processor iMesh, the Tile Processor Architecture's on-chip interconnection network, connects the multicore processor's tiles with five 2D mesh networks, each specialized for a different use. Taking advantage of the five networks, the C-based iLib interconnection library efficiently maps program communication across the on-chip interconnect. The Tile Processor's first implementation, the TILE64, contains 64 cores and can execute 192 billion 32-bit operations per second at 1 GHz.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed, and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly, to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality. Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies. Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies. Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data). Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem. Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied with the advent of cognitive radio. Cognitive radio is a new type of technology that enables secondary spectrum usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied resources opportunistically and coexist with wireless local area networks (WLAN). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids causing fatal damage to licensed users. An ns2 simulation model is then developed to evaluate its performance in scenarios with coexisting WLAN and cognitive networks.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
Estimating and sampling graphs with multidimensional random walks Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.
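A hedged sketch of the Frontier sampling idea described above, under assumed conventions (adjacency-dict graph, degree-proportional choice of which walker moves; the names and parameters are illustrative, not from the paper):

```python
import random

# Sketch of Frontier sampling: m dependent walkers; at each step one walker
# is chosen with probability proportional to its current vertex's degree,
# moves to a uniformly random neighbor, and the traversed edge is sampled.

def frontier_sample(graph, m, steps, seed=0):
    """graph: dict node -> list of neighbors (undirected, connected)."""
    rng = random.Random(seed)
    frontier = rng.sample(sorted(graph), m)   # m initial walkers
    edges = []
    for _ in range(steps):
        degs = [len(graph[u]) for u in frontier]
        i = rng.choices(range(m), weights=degs)[0]   # degree-weighted pick
        u = frontier[i]
        v = rng.choice(graph[u])                     # uniform neighbor
        edges.append((u, v))
        frontier[i] = v                              # that walker moves
    return edges

g = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # a 4-cycle
print(frontier_sample(g, m=2, steps=5))
```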
An Identity-Free and On-Demand Routing Scheme against Anonymity Threats in Mobile Ad Hoc Networks Introducing node mobility into the network also introduces new anonymity threats. This important change to the concept of anonymity has recently attracted attention in mobile wireless security research. This paper presents identity-free routing and on-demand routing as two design principles of anonymous routing in mobile ad hoc networks. We devise ANODR (ANonymous On-Demand Routing) as the needed anonymous routing scheme that is compliant with the design principles. Our security analysis and simulation study verify the effectiveness and efficiency of ANODR.
Space-Optimal Counting in Population Protocols. In this paper, we study the fundamental problem of counting, which consists in computing the size of a system. We consider the distributed communication model of population protocols: finite-state, anonymous, and asynchronous mobile agents communicating in pairs according to a fairness condition. This work significantly improves the previous results known for counting in this model in terms of exact space complexity. We present and prove correct the first space-optimal protocols solving the problem for two classical types of fairness, global and weak. Both protocols require no initialization of the counted agents. The protocol designed for global fairness, surprisingly, uses only one bit of memory (two states) per counted agent. The protocol functioning under weak fairness requires the necessary log P bits (P states) per counted agent to be able to count up to P agents. Interestingly, this protocol exploits the intriguing Gros sequence of natural numbers, which is also used in the solutions to the Chinese Rings and the Hanoi Towers puzzles.
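For reference, a minimal sketch of the Gros sequence mentioned above, commonly described as the ruler sequence whose n-th term is one plus the number of trailing zeros in the binary expansion of n; how the counting protocol itself uses it is not shown, and the helper name is illustrative:

```python
# The Gros (ruler) sequence: term n is the 1-based position of the lowest
# set bit of n; it also names the disc moved at step n in the classic
# Towers of Hanoi solution.

def gros(n):
    return (n & -n).bit_length()

print([gros(n) for n in range(1, 16)])
# -> [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
```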
Computing anonymously with arbitrary knowledge
Naming and Counting in Anonymous Unknown Dynamic Networks In this work, we study the fundamental naming and counting problems (and some variations) in networks that are anonymous, unknown, and possibly dynamic. In counting, nodes must determine the size of the network n and in naming they must end up with unique identities. By anonymous we mean that all nodes begin from identical states apart possibly from a unique leader node and by unknown that nodes have no a priori knowledge of the network (apart from some minimal knowledge when necessary) including ignorance of n. Network dynamicity is modeled by the 1-interval connectivity model [KLO10], in which communication is synchronous and a (worst-case) adversary chooses the edges of every round subject to the condition that each instance is connected. We first focus on static networks with broadcast where we prove that, without a leader, counting is impossible to solve and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. We also show that a unique leader suffices in order to solve counting in linear time. Then we focus on dynamic networks with broadcast. We conjecture that dynamicity renders nontrivial computation impossible. In view of this, we let the nodes know an upper bound on the maximum degree that will ever appear and show that in this case the nodes can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. Interestingly, this natural variation is proved to be computationally equivalent to a full-knowledge model, in which unique names exist and the size of the network is known.
Information dissemination in highly dynamic graphs We investigate to what extent flooding and routing is possible if the graph is allowed to change unpredictably at each time step. We study what minimal requirements are necessary so that a node may correctly flood or route a message in a network whose links may change arbitrarily at any given point, subject to the condition that the underlying graph is connected. We look at algorithmic constraints such as limited storage, no knowledge of an upper bound on the number of nodes, and no usage of identifiers. We look at flooding as well as routing to some existing specified destination and give algorithms.
ALOHA packet system with and without slots and capture This paper was originally distributed informally as ARPA Satellite System Note 8 on June 26, 1972. The paper is an important one and since its initial limited distribution, the paper has been frequently referenced in the open literature, but the paper itself has been unavailable in the open literature. Publication here is meant to correct the previous gap in the literature. As the paper was originally distributed only to other researchers intimately familiar with the area covered by the paper, the paper makes few concessions to the reader along the lines of introductory or tutorial material. Therefore, a bit of background material follows. ALOHA packet systems were originally described by Abramson ("The ALOHA System--Another Alternative for Computer Communication," Proceedings of the AFIPS Fall Joint Computer Conference, Vol. 37, 1970, pp. 281--285). In an ALOHA system a single broadcast channel is shared by a number of communicating devices. In the version originally described by Abramson, every device transmits its packets independent of any other device or any specific time. That is, the device transmits the whole packet at a random point in time; the device then times out for receiving an acknowledgment. If an acknowledgment is not received, it is assumed that a collision occurred with a packet transmitted by some other device and the packet is retransmitted after a random additional waiting time (to avoid repeated collisions). Under a certain set of assumptions, Abramson showed that the effective capacity of such a channel is 1/(2e). Roberts in the present paper investigates methods of increasing the effective channel capacity of such a channel. One method he proposes to gain in capacity is to consider the channel to be slotted into segments of time whose duration is equal to the packet transmission time, and to require the devices to begin a packet transmission at the beginning of a time slot. Another method Roberts proposes to gain in capacity is to take advantage of the fact that even though packets from two devices collide in the channel (i.e., they are transmitted so they pass through the channel at overlapping times), it may be possible for the receiver(s) to "capture" the signal of one of the transmitters, and thus correctly receive one of the conflicting packets, if one of the transmitters has a sufficiently greater signal than the other. Roberts considers the cases of both satellite and ground radio channels.
A tight lower bound on the cover time for random walks on graphs We prove that the expected time for a random walk to cover all n vertices of a graph is at least (1 + o(1)) n ln n.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
An Integrated Full-Wave CMOS Rectifier With Built-In Back Telemetry for RFID and Implantable Biomedical Applications This paper describes the design and implementation of an integrated full-wave standard CMOS rectifier with a built-in passive back telemetry mechanism for radio frequency identification (RFID) and implantable biomedical device applications. The new rectifier eliminates the need for additional large switches for load modulation and provides more flexibility in choosing the most appropriate load shift keying (LSK) mechanism through shorting and/or opening the transponder coil for any certain application. The results are a more robust back telemetry link, improved read range, higher back telemetry data rate, reduced rectifier dropout voltage, and savings in chip area compared to the traditional topologies. A prototype version of the new rectifier is implemented in the AMI 0.5-μm n-well 3-metal 2-poly 5 V standard CMOS process, occupying ~0.25 mm² of chip area. The prototype rectifier was powered through a wireless inductive link and proved to be fully functional in its three modes of operation: rectification, open coil (OC), and short coil (SC).
A bridging model for parallel computation, communication, and I/O
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use spectrum efficiently. Such waste can be remedied with the advent of cognitive radio. Cognitive radio is a new type of technology that enables secondary spectrum usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied resources opportunistically and coexist with wireless local area networks (WLAN). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids causing fatal damage to licensed users. An ns2 simulation model is then developed to evaluate its performance in scenarios with coexisting WLAN and cognitive networks.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.1176
0.112
0.112
0.041592
0.030598
0.008906
0.00234
0.000618
0
0
0
0
0
0
A 1-to-1-kHz, 4.2-to-544-nW, Multi-Level Comparator Based Level-Crossing ADC for IoT Applications. This brief presents the design of an ultra-low power level-crossing analog-to-digital converter (LC-ADC) for IoT and biomedical applications. The proposed LC-ADC utilizes only one multi-level comparator instead of multiple comparators as in conventional LC-ADCs, leading to a simplified implementation and a significant reduction in power. Implemented in a 0.18-μm CMOS process, the LC-ADC achieves 7.9 equi...
A Review of Algorithm & Hardware Design for AI-Based Biomedical Applications. This paper reviews the state of the art and trends in AI-based biomedical processing algorithms and hardware. The algorithms and hardware for different biomedical applications such as ECG, EEG, and hearing aids have been reviewed and discussed. For algorithm design, various widely used biomedical signal classification algorithms have been discussed, including support vector machines (SVM), back propagation neural networks (BPNN), convolutional neural networks (CNN), probabilistic neural networks (PNN), recurrent neural networks (RNN), long short-term memory (LSTM) networks, and fuzzy neural networks. The pros and cons of the classification algorithms have been analyzed and compared in the context of application scenarios. The research trends of AI-based biomedical processing algorithms and applications are also discussed. For hardware design, various AI-based biomedical processors have been reviewed and discussed, including ECG classification processors, EEG classification processors, EMG classification processors, and hearing aid processors. Various techniques at the architecture and circuit level have been analyzed and compared. The research trends of AI-based biomedical processors have also been discussed.
A Memristor-Based Continuous-Time Digital FIR Filter for Biomedical Signal Processing This paper proposes a new timing storage circuit based on memristors. Its ability to store and reproduce timing information in an analog manner without performing quantization can be useful for a wide range of applications. For continuous-time (CT) digital filters, the power and area costly analog delay blocks, which are usually implemented as inverter chains or their variants, can be replaced by the proposed timing storage circuits to delay CT digital signals in a more efficient way, especially for low-frequency biomedical applications that require very long tap delays. In addition, the same timing storage circuits also enable the storage of CT digital signals, extending the benefits of CT digital signal processing (DSP) to applications that require signal storage. As an example, a 15-tap CT finite impulse response (FIR) Savitzky-Golay (S-G) filter was designed with memristor-based delay blocks to smoothen electrocardiographic (ECG) signals accompanied with high-frequency noise. The simulated power consumption under a 3.3-volt supply was 6.63 .
An ECG recording front-end with continuous-time level-crossing sampling. An ECG recording front-end with a continuous-time asynchronous level-crossing analog-to-digital converter (LC-ADC) is proposed. The system is a voltage and current mixed-mode system, which comprises a low noise amplifier (LNA), a programmable voltage-to-current converter (PVCC) as a programmable gain amplifier (PGA), and an LC-ADC with calibration DACs and an RC oscillator. The LNA shows an input-referred noise of 3.77 μVrms over a 0.06 Hz-950 Hz bandwidth. The total harmonic distortion (THD) of the LNA is 0.15% for a 10 mVPP input. The ECG front-end consumes 8.49 μW from a 1 V supply and achieves an ENOB up to 8 bits. The core area of the proposed front-end is 690 × 710 μm², fabricated in a 0.18 μm CMOS technology.
Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things In the Internet-of-Things (IoT) era, billions of sensors and devices collect and process data from the environment, transmit them to cloud centers, and receive feedback via the Internet for connectivity and perception. However, transmitting massive amounts of heterogeneous data, perceiving complex environments from these data, and then making smart decisions in a timely manner are difficult. Artif...
A Level-Crossing Based QRS-Detection Algorithm for Wearable ECG Sensors In this paper, an asynchronous analog-to-information conversion system is introduced for measuring the RR intervals of electrocardiogram (ECG) signals. The system contains a modified level-crossing analog-to-digital converter and a novel algorithm for detecting the R-peaks from the level-crossing sampled data in a compressed volume of data. Simulated with the MIT-BIH Arrhythmia Database, the proposed system delivers an average detection accuracy of 98.3%, a sensitivity of 98.89%, and a positive prediction of 99.4%. Synthesized in 0.13 μm CMOS technology with a 1.2 V supply voltage, the overall system consumes 622 nW with a core area of 0.136 mm², which makes it suitable for wearable wireless ECG sensors in body-sensor networks.
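A minimal sketch of plain level-crossing sampling, the front half of the system above (the R-peak detection algorithm itself is not modeled; the level spacing and test signal are illustrative assumptions):

```python
import math

# Sketch of level-crossing sampling: emit a (time, direction) event each
# time the signal crosses the next quantization level up or down.

def level_crossing_sample(signal, delta):
    """signal: sequence of samples; delta: level spacing. Returns events."""
    events = []
    level = round(signal[0] / delta)        # current quantization level
    for t, x in enumerate(signal):
        while x >= (level + 1) * delta:
            level += 1
            events.append((t, +1))          # upward crossing
        while x <= (level - 1) * delta:
            level -= 1
            events.append((t, -1))          # downward crossing
    return events

sig = [math.sin(2 * math.pi * t / 50) for t in range(100)]
print(level_crossing_sample(sig, delta=0.25)[:8])
```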
Evaluation of Level-Crossing ADCs for Event-Driven ECG Classification In this paper, a new methodology for choosing design parameters of level-crossing analog-to-digital converters (LC-ADCs) is presented that improves sampling accuracy and reduces the data stream rate. Using the MIT-BIH Arrhythmia dataset, several LC-ADC models are designed, simulated and then evaluated in terms of compression and signal-to-distortion ratio. A new one-dimensional convolutional neura...
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: (1) radio-scene analysis; (2) channel-state estimation and predictive modeling; (3) transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
The gem5 simulator The gem5 simulation infrastructure is the merger of the best aspects of the M5 [4] and GEMS [9] simulators. M5 provides a highly configurable simulation framework, multiple ISAs, and diverse CPU models. GEMS complements these features with a detailed and flexible memory system, including support for multiple cache coherence protocols and interconnect models. Currently, gem5 supports most commercial ISAs (ARM, ALPHA, MIPS, Power, SPARC, and x86), including booting Linux on three of them (ARM, ALPHA, and x86). The project is the result of the combined efforts of many academic and industrial institutions, including AMD, ARM, HP, MIPS, Princeton, MIT, and the Universities of Michigan, Texas, and Wisconsin. Over the past ten years, M5 and GEMS have been used in hundreds of publications and have been downloaded tens of thousands of times. The high level of collaboration on the gem5 project, combined with the previous success of the component parts and a liberal BSD-like license, make gem5 a valuable full-system simulation tool.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
Constrained Consensus and Optimization in Multi-Agent Networks We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
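A hedged sketch of the projected consensus iteration described above, under simplifying assumptions (scalar estimates, interval constraint sets so projection is clipping, a fixed doubly stochastic weight matrix; all names and values are illustrative):

```python
# Sketch: each agent averages neighbors' estimates with doubly stochastic
# weights, then projects onto its own constraint set (here an interval).

def projected_consensus(x, weights, boxes, iters=200):
    """x: initial scalar estimates; weights: doubly stochastic matrix;
    boxes: per-agent (lo, hi) constraint intervals."""
    n = len(x)
    for _ in range(iters):
        avg = [sum(weights[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [min(max(avg[i], boxes[i][0]), boxes[i][1]) for i in range(n)]
    return x

# Path graph 0-1-2 with Metropolis weights (doubly stochastic).
w = [[2/3, 1/3, 0.0], [1/3, 1/3, 1/3], [0.0, 1/3, 2/3]]
print(projected_consensus([0.0, 5.0, 10.0], w, [(2, 8), (0, 10), (1, 6)]))
# estimates (approximately) agree on a point in the intersection [2, 6]
```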
Design Aspects of an Active Electromagnetic Suspension System for Automotive Applications. This paper is concerned with the design aspects of an active electromagnetic suspension system for automotive applications which combines a brushless tubular permanent magnet actuator (TPMA) with a passive spring. This system provides additional stability and safety by performing active roll and pitch control during cornering and braking. Furthermore, elimination of road irregularities is possible, hence passenger drive comfort is increased. Based upon measurements, static and dynamic specifications of the actuator are derived. The electromagnetic suspension is installed on a quarter-car test setup, and the improved performance using roll control is measured and compared to a commercial passive system. An alternative design using a slotless external-magnet tubular actuator is proposed which fulfills the derived performance, thermal, and volume specifications.
Formal Analysis of Leader Election in MANETs Using Real-Time Maude.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
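A minimal sketch of the plain successive-approximation search underlying any SAR ADC (the paper's capacitor splitting and event-triggered redundant cycle are not modeled; the supply and bit width follow the numbers above, everything else is illustrative):

```python
# Sketch: bit-by-bit binary search of the input voltage against an ideal DAC.

def sar_convert(vin, vref=0.65, bits=10):
    """Return the digital code for vin in [0, vref)."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)               # tentatively set this bit
        if trial * vref / (1 << bits) <= vin:
            code = trial                      # keep it if DAC stays below vin
    return code

print(sar_convert(0.40))   # floor(0.40 / 0.65 * 1024) = 630
```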
1.1
0.1
0.1
0.1
0.1
0.05
0.033333
0
0
0
0
0
0
0
A Double-Tail Latch-Type Voltage Sense Amplifier with 18ps Setup+Hold Time.
Predicting Data-Dependent Jitter An analysis for calculating data-dependent jitter (DDJ) in a first-order system is introduced. The predicted DDJ features unique threshold crossing times with self-similar geometry. An approximation for DDJ in second-order systems is described in terms of the damping factor and natural frequency. Higher order responses demonstrate conditions under which unique threshold crossing times do not exist...
A 3.0 Gb/s clock data recovery circuits based on digital DLL for clock-embedded display interface.
A 0.65-to-10.5 Gb/s Reference-Less CDR With Asynchronous Baud-Rate Sampling for Frequency Acquisition and Adaptive Equalization This paper presents a continuous-rate reference-less clock and data recovery (CDR) circuit with an asynchronous baud-rate sampling to achieve an adaptive equalization as well as a data rate acquisition. The proposed scheme also enables the use of a successive approximation register (SAR) based approach in the frequency acquisition and results in a fast coarse lock process. The CDR guarantees a robust operation of a fine locking even in the presence of large input data jitter due to the adaptive equalization and a jitter-tolerable rotation frequency detector (RFD) that eliminates a dead-zone problem with a simple circuitry. The fabricated CDR in 65 nm CMOS shows a wide lock range of 0.65-to-10.5 Gb/s at a bit error rate (BER) of . The CDR consumes 26 mW from a single supply voltage of 1 V at 10 Gb/s including the power consumption for equalizer. By an adaptive current bias control, the power consumption is also linearly scaled down with the data rate, exhibiting a slope of about 2 mW decrease per Gb/s.
A Reference-Less Clock and Data Recovery Circuit Using Phase-Rotating Phase-Locked Loop A reference-less half-rate digital clock and data recovery (CDR) circuit employing a phase-rotating phase-locked loop (PRPLL) as phase interpolator is presented. By implementing the proportional control in the phase domain within the PRPLL, the proposed CDR decouples the jitter transfer (JTRAN) bandwidth from the jitter tolerance (JTOL) corner frequency, eliminates jitter peaking, and removes JTRAN dependence on bang-bang phase detector gain. Fabricated in a 90 nm CMOS process, the prototype CDR achieves error-free operation (BER < 10⁻¹²) with PRBS data sequences ranging from PRBS7 to PRBS31. At 5 Gb/s, it consumes 13.1 mW power and achieves a recovered clock long-term jitter of 5.0 ps rms/44.0 ps pp when operating with PRBS31 input data. The measured JTRAN bandwidth is 2 MHz and the JTOL corner frequency is 16 MHz. The CDR is tolerant to 110 mVpp of sinusoidal noise on the DCO supply voltage at the worst-case noise frequency of 7 MHz. At 2.5 GHz, the PRPLL consumes 2.9 mW and achieves -134 dBc/Hz phase noise at 1 MHz frequency offset. The differential and integral non-linearity of its digital-to-phase transfer characteristic are within ±0.2 LSB and ±0.4 LSB, respectively.
An 8 Bit 4 GS/s 120 mW CMOS ADC A time-interleaved ADC employs four pipelined time-interleaved channels along with a new timing mismatch detection algorithm and a high-resolution variable delay line. The digital background calibration technique suppresses the interchannel timing mismatches, achieving an SNDR of 44.4 dB and a figure of merit of 219 fJ/conversion-step in 65 nm CMOS technology.
Correction of Mismatches in a Time-Interleaved Analog-to-Digital Converter in an Adaptively Equalized Digital Communication Receiver In this paper, techniques to overcome the errors caused by the offset, gain, sample-time, and bandwidth mismatches among time-interleaved analog-to-digital converters in a high-speed baseband digital communication receiver are presented. The errors introduced by these mismatches are corrected using least-mean-square adaptation implemented in digital-signal-processing blocks. Gain, sample-time, and bandwidth mismatches are corrected by modifying the operation of the adaptive receive equalizer itself to minimize the hardware overhead. Simulation results show that the gain, offset, sample-time, and bandwidth mismatches are sufficiently corrected for practical digital communication receivers.
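A hedged sketch of the LMS idea described above, reduced to a single sub-ADC's gain error (offset, sample-time, and bandwidth terms, and the embedding in the receive equalizer, are not modeled; the step size, signals, and names are illustrative assumptions):

```python
# Sketch: adapt one sub-ADC's gain g so that g*x tracks the reference.

def lms_gain_correct(samples, references, mu=0.05):
    g = 1.0
    for x, ref in zip(samples, references):
        e = ref - g * x        # error between corrected sample and reference
        g += mu * e * x        # LMS update of the gain coefficient
    return g

ideal = [0.5, -1.0, 0.25, -0.75] * 100   # reference (decision) values
meas = [1.05 * v for v in ideal]         # channel with a 5% gain error
print(round(lms_gain_correct(meas, ideal), 3))   # ≈ 0.952, about 1/1.05
```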
A 28-Gb/s 4-Tap FFE/15-Tap DFE Serial Link Transceiver in 32-nm SOI CMOS Technology. This paper presents a 28-Gb/s transceiver in 32-nm SOI CMOS technology for chip-to-chip communications over high-loss electrical channels such as backplanes. The equalization needed for such applications is provided by a 4-tap baud-spaced feed-forward equalizer (FFE) in the transmitter and a two-stage peaking amplifier and 15-tap decision-feedback equalizer (DFE) in the receiver. The transmitter e...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
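To make the key-to-node mapping concrete, the sketch below is a minimal, centralized model of Chord's successor function over a hashed identifier circle. It is not the distributed protocol: finger tables, node joins/leaves, and replication are omitted, and all node/key names are hypothetical.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier space is [0, 2**M); kept small here for readability

def chord_id(name: str) -> int:
    """Hash a node or key name onto the identifier circle (truncated SHA-1)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

class Ring:
    def __init__(self, nodes):
        # Sorting node identifiers once turns successor lookup into binary search.
        self.ids = sorted(chord_id(n) for n in nodes)
        self.node_of = {chord_id(n): n for n in nodes}

    def successor(self, key: str) -> str:
        """First node at or clockwise after the key's identifier."""
        i = bisect_left(self.ids, chord_id(key))
        return self.node_of[self.ids[i % len(self.ids)]]  # wrap past the top

ring = Ring([f"node-{i}" for i in range(8)])
print(ring.successor("some-data-item"))  # the node responsible for this key
```

In the full protocol each node keeps only O(log N) routing state (its finger table) and resolves the same successor query in O(log N) hops, rather than via a global sorted index as in this sketch.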
Local Divergence of Markov Chains and the Analysis of Iterative Load-Balancing Schemes We develop a general technique for the quantitative analysis of iterative distributed load balancing schemes. We illustrate the technique by studying two simple, intuitively appealing models that are prevalent in the literature: the diffusive paradigm, and periodic balancing circuits (or the dimension exchange paradigm). It is well known that such load balancing schemes can be roughly modeled by Markov chains, but also that this approximation can be quite inaccurate. Our main contribution is an effective way of characterizing the deviation between the actual loads and the distribution generated by a related Markov chain, in terms of a natural quantity which we call the local divergence. We apply this technique to obtain bounds on the number of rounds required to achieve coarse balancing in general networks, cycles and meshes in these models. For balancing circuits, we also present bounds for the stronger requirement of perfect balancing, or counting.
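As a rough illustration of the idealized Markov-chain model the paper starts from (not its analysis), the following sketch runs a first-order diffusion update on a cycle with divisible load; the paper's local divergence quantifies how far actual indivisible-token schemes deviate from exactly this kind of idealized trajectory.

```python
def diffusion_step(load, alpha=0.5):
    """One synchronous diffusion round on a cycle: every node keeps a
    (1 - alpha) share and averages the rest with its two neighbors."""
    n = len(load)
    return [(1 - alpha) * load[i] + (alpha / 2) * (load[i - 1] + load[(i + 1) % n])
            for i in range(n)]

load = [8.0, 0.0, 0.0, 0.0]         # total load 8 on a 4-cycle
for _ in range(30):
    load = diffusion_step(load)
print([round(x, 3) for x in load])  # approaches the balanced vector [2, 2, 2, 2]
```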
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to its increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient, however they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient, however they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
High-performance error amplifier for fast transient DC-DC converters. A new error amplifier is presented for fast transient response of dc-dc converters. The amplifier has low quiescent current to achieve high power conversion efficiency, but it can supply sufficient current during large-signal operation. Two comparators detect large-signal variations, and turn on extra current supplier if necessary. The amount of extra current is well controlled, so that the system...
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup-time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip-area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.043816
0.0444
0.0444
0.0222
0.015067
0.005951
0.000889
0.000104
0
0
0
0
0
0
Analysis of the Effect of Source Capacitance and Inductance on $N$ -Path Mixers and Filters. Switch R-C passive N-path mixers and filters enable interference-robust radio receivers with a flexibly programmable center frequency defined by a digital multi-phase clock. The radio frequency (RF) range of these circuits is limited by parasitic shunt capacitances, which introduce signal loss and degrade noise figure. Moreover, the linear periodically time varying nature of switch R-C circuits re...
A 1.2-V Self-Reconfigurable Recursive Mixer With Improved IF Linearity in 130-nm CMOS. A 1.2-V self-reconfigurable recursive mixer structure with improved intermediate frequency (IF) linearity and signal isolation is proposed. For a traditional recursive mixer that reuses the gm stage to amplify both the input radio frequency (RF) and downconverted IF signal, signal isolation and linearity are limited by the signal-reusing structure. In this brief, the self-reconfigurable gm stage i...
A 0.1–3.5-GHz Duty-Cycle Measurement and Correction Technique in 130-nm CMOS A duty-cycle correction technique using a novel pulsewidth modification cell is demonstrated across a frequency range of 100 MHz–3.5 GHz. The technique works at frequencies where most digital techniques implemented in the same technology node fail. An alternative method of making time domain measurements such as duty cycle and rise/fall times from the frequency domain data is introduced. The data are obtained from equipment that has significantly lower bandwidth than required for measurements in the time domain. An algorithm for the same has been developed and experimentally verified. The correction circuit is implemented in a 0.13-μm CMOS technology and occupies an area of 0.011 mm². It corrects to a residual error of less than 1%. The extent of correction is limited by the technology at higher frequencies.
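As a hedged illustration of extracting a time-domain quantity from low-bandwidth frequency-domain data (not the paper's exact algorithm), the duty cycle D of an ideal rectangular wave satisfies |c_n| = A|sin(nπD)|/(nπ), so the ratio of the first two harmonic magnitudes gives |c2|/|c1| = |cos(πD)|, and D can be recovered up to the inherent D ↔ 1−D ambiguity:

```python
import math

def duty_from_harmonics(mag1, mag2):
    """Duty cycle from the first two harmonic magnitudes of an ideal
    rectangular wave; returns the solution in (0, 0.5]."""
    ratio = min(mag2 / mag1, 1.0)      # guard against measurement noise
    return math.acos(ratio) / math.pi

# Synthetic check for D = 0.3 with amplitude A = 1: |c_n| = |sin(n*pi*D)|/(n*pi).
D = 0.3
c1 = abs(math.sin(math.pi * D)) / math.pi
c2 = abs(math.sin(2 * math.pi * D)) / (2 * math.pi)
print(round(duty_from_harmonics(c1, c2), 3))   # 0.3
```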
An Integrated Discrete-Time Delay-Compensating Technique for Large-Array Beamformers This paper implements a wide aperture high-resolution true time delay for frequency-uniform beamforming gain in large-scale phased arrays. We propose a baseband discrete-time delay-compensating technique to augment the conventional phase-shift-based analog or hybrid beamformers. A generalized design methodology is first developed to compare delay-compensating analog or hybrid beamforming architectures with their digital counterparts for a given number of antenna elements, modulation bandwidth, ADC dynamic range, and delay resolution. This paper shows that delay-compensating analog or hybrid beamformers are more energy-efficient for high dynamic-range applications compared to true-time-delay digital beamformers. To demonstrate the feasibility of our proposed technique, a four-element analog delay-compensating baseband beamformer in 65-nm CMOS is prototyped. A time-interleaved switched-capacitor array implements the discrete-time delay-compensating beamformer with a wide delay range of 15 ns and a 5-ps resolution. Measured power consumption is 47 mW with frequency-uniform array gain over 100-MHz modulated bandwidth, independent of angle of arrival. The proposed delay compensation scheme is scalable to accommodate the delay differences of large antenna arrays with higher range/resolution ENOB compared with prior art.
A Full-Duplex Receiver With True-Time-Delay Cancelers Based on Switched-Capacitor-Networks Operating Beyond the Delay–Bandwidth Limit Wideband self-interference cancellation (SIC) in full-duplex (FD) radios requires the achievement of large delays to accurately emulate the SI channel. However, compact, power-efficient, low-loss/noise/distortion nanosecond-scale delays are extremely challenging to achieve on silicon. Passive transmission lines on silicon are lossy and area-intensive and exhibit reduced bandwidths when miniaturize...
Integrated Wideband Self-Interference Cancellation in the RF Domain for FDD and Full-Duplex Wireless A fully integrated technique for wideband cancellation of transmitter (TX) self-interference (SI) in the RF domain is proposed for multiband frequency-division duplexing (FDD) and full-duplex (FD) wireless applications. Integrated wideband SI cancellation (SIC) in the RF domain is accomplished through: 1) a bank of tunable, reconfigurable second-order high-Q RF bandpass filters in the canceller th...
Tunable High-Q N-Path Band-Pass Filters: Modeling and Verification. A differential single-port switched-RC N-path filter with band-pass characteristic is proposed. The switching frequency defines the center frequency, while the RC-time and duty cycle of the clock define the bandwidth. This allows for high-Q highly tunable filters which can for instance be useful for cognitive radio. Using a linear periodically time-variant (LPTV) model, exact expressions for the filter transfer function are derived. The behavior of the circuit including non-idealities such as maximum rejection, spectral aliasing, noise and effects due to mismatch in the paths is modeled and verified via measurements. A simple RLC equivalent circuit is provided, modeling bandwidth, quality factor and insertion loss of the filter. A 4-path architecture is realized in 65 nm CMOS. An off-chip transformer acts as a balun, improves filter-Q and realizes impedance matching. The differential architecture reduces clock-leakage and suppresses selectivity around even harmonics of the clock. The filter has a constant -3 dB bandwidth of 35 MHz and can be tuned from 100 MHz up to 1 GHz. Over the whole band, IIP3 is better than 14 dBm, P-1dB = 2 dBm and the noise figure is 3-5 dB, while the power dissipation increases from 2 mW to 16 mW (only clocking power).
Differential Power Analysis . Cryptosystem designers frequently assume that secrets willbe manipulated in closed, reliable computing environments. Unfortunately,actual computers and microchips leak information about the operationsthey process. This paper examines specific methods for analyzingpower consumption measurements to find secret keys from tamperresistant devices. We also discuss approaches for building cryptosystemsthat can operate securely in existing hardware that leaks information.Keywords:...
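A minimal difference-of-means sketch of the attack on synthetic traces may help; the selection function below is a toy nonlinear stand-in for a real cipher's S-box bit, and the trace model (one leaking sample, Gaussian noise) is assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sel_bit(p: int, k: int) -> int:
    """Toy nonlinear selection function standing in for a cipher S-box bit."""
    return pow(p ^ k, 3, 257) & 1

# Synthetic campaign: sample 40 of each 100-sample trace leaks the selected bit.
SECRET = 0x2A
plaintexts = rng.integers(0, 256, size=2000)
traces = rng.standard_normal((2000, 100))
traces[:, 40] += 0.5 * np.array([sel_bit(int(p), SECRET) for p in plaintexts])

def diff_of_means(guess: int) -> np.ndarray:
    """Partition traces by the guessed bit; a spike flags a correct guess."""
    bits = np.array([sel_bit(int(p), guess) for p in plaintexts])
    return traces[bits == 1].mean(axis=0) - traces[bits == 0].mean(axis=0)

best = max(range(256), key=lambda k: np.abs(diff_of_means(k)).max())
print(hex(best))  # the differential spike at sample 40 singles out 0x2a
```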
Network-based robust H∞ control of systems with uncertainty This paper is concerned with the design of robust H∞ controllers for uncertain networked control systems (NCSs) with the effects of both the network-induced delay and data dropout taken into consideration. A new analysis method for H∞ performance of NCSs is provided by introducing some slack matrix variables and employing the information of the lower bound of the network-induced delay. The designed H∞ controller is of memoryless type, which can be obtained by solving a set of linear matrix inequalities. Numerical examples and simulation results are given finally to illustrate the effectiveness of the method.
Bayesian learning in social networks We extend the standard model of social learning in two ways. First, we introduce a social network and assume that agents can only observe the actions of agents to whom they are connected by this network. Secondly, we allow agents to choose a different action at each date. If the network satisfies a connectedness assumption, the initial diversity resulting from diverse private information is eventually replaced by uniformity of actions, though not necessarily of beliefs, in finite time with probability one. We look at particular networks to illustrate the impact of network architecture on speed of convergence and the optimality of absorbing states. Convergence is remarkably rapid, so that asymptotic results are a good approximation even in the medium run.
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
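For context, here is a sketch of the classical real bandpass-sampling validity test that the paper generalizes; the quadrature (I/Q) rules it derives admit many more rates, down to roughly the band's width B, which this real-sampling sketch does not model.

```python
def real_bandpass_rate_valid(fs, f_lo, f_hi):
    """Alias-free uniform sampling of a real bandpass signal occupying
    [f_lo, f_hi]: valid iff 2*f_hi/n <= fs <= 2*f_lo/(n-1) for some
    integer n, or fs is at least the ordinary Nyquist rate 2*f_hi."""
    if fs >= 2 * f_hi:
        return True
    n_max = int(f_hi // (f_hi - f_lo))   # highest usable aliasing zone
    return any(2 * f_hi / n <= fs <= 2 * f_lo / (n - 1)
               for n in range(2, n_max + 1))

# A 10-MHz-wide band at 70-80 MHz: 27 MS/s folds it cleanly, 25 MS/s does not.
print(real_bandpass_rate_valid(27e6, 70e6, 80e6))  # True  (n = 6 zone)
print(real_bandpass_rate_valid(25e6, 70e6, 80e6))  # False
```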
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.1
0.1
0.1
0.1
0.1
0.05
0.016667
0
0
0
0
0
0
0
A Multilateral Transactive Energy Framework of Hybrid Charging Stations for Low-Carbon Energy-Transport Nexus This article proposes a multilateral multienergy trading framework for synergetic hydrogen (H2) and electricity transactions among renewable-dominated hybrid charging stations (HCSs). In this framework, each autonomous HCS with its own renewable energy resource (RES) endowment can harvest local renewables for internal green H2 and electricity generation to simultaneously meet demands of electric vehicles (EVs) and hydrogen-powered vehicles (HVs) from the transportation network. The surplus electricity/H2 production of the HCS is accommodated by external multilateral transactions to increase the additional profit. Besides, each HCS is modeled as a sustainable energy hub, and multiple hubs with multienergy transactions contribute toward a low-carbon energy-transport nexus. A partial differential equation model based on fluid-dynamic theory is formed to capture the temporal and spatial dynamics of traffic flows for estimating the EV/HV loads at HCSs. Furthermore, a distributed multilateral pricing algorithm is developed to iteratively derive the optimal prices and quantities for transactive electricity and H2. Comparative studies corroborate the superiority of the proposed methodology on economic merits and RES accommodation.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
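Since dominance frontiers are the pivotal construction here, a compact sketch may help. This follows the later runner-based formulation (in the Cooper-Harvey-Kennedy style), which computes the same sets as the definition in this paper; `preds` (CFG predecessor lists) and `idom` (precomputed immediate dominators) are assumed inputs.

```python
def dominance_frontiers(preds, idom):
    """Dominance frontiers from predecessor lists and immediate dominators:
    only join points (nodes with >= 2 predecessors) contribute entries."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:
            for p in ps:
                runner = p
                while runner != idom[b]:   # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # 'a' and 'b' map to {'join'}
```

In SSA construction, a definition in block b forces phi-functions at (the iterated closure of) DF(b), which is why this structure is computed first.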
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
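As a concrete instance of the splitting described above, here is a minimal sketch of ADMM applied to the lasso, one of the review's examples. It uses the standard scaled form with x/z/u updates; the problem data below are synthetic, and the choices of rho and the iteration count are arbitrary.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z, via scaled-form ADMM."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cache the x-update factor
    Atb = A.T @ b
    for _ in range(iters):
        x = AtA_inv @ (Atb + rho * (z - u))                            # ridge step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft threshold
        u = u + x - z                                                  # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=0.1), 2))  # sparse, close to x_true
```

The x-update is a cached linear solve and the z-update is elementwise, which is exactly the decomposability the review highlights for large-scale problems.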
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) to a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) in the car at a height of 0.2-0.4 m above the road surface, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which makes it possible to attenuate large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power-supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Store-and-Forward Buffer Requirements in a Packet Switching Network Previous analytic models for packet switching networks have always assumed infinite storage capacity in store-and-forward (S/F) nodes. In this paper, we relax this assumption and present a model for a packet switching network in which each node has a finite pool of S/F buffers. A packet arriving at a node in which all S/F buffers are temporarily filled is discarded. The channel transmission control mechanisms of positive acknowledgment and time-out of packets are included in this model. Individual S/F nodes are analyzed separately as queueing networks with different classes of packets. The single-node results are interfaced by imposing a continuity-of-flow constraint. A heuristic algorithm for determining a balanced assignment of nodal S/F buffer capacities is proposed. Numerical results for the performance of a 19-node network are illustrated.
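The paper's model is a network of multi-class queues, but the core finite-buffer effect can be illustrated with a much simpler, hedged stand-in: the closed-form blocking probability of a single M/M/1/K queue (capacity K, counting the packet in service), which shows how the discard probability the paper analyzes falls as buffers are added.

```python
def mm1k_blocking(rho, K):
    """P(arriving packet is discarded) for an M/M/1 queue with total
    capacity K (one in service plus K - 1 buffered), offered load rho."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho ** K / (1 - rho ** (K + 1))

for K in (2, 4, 8, 16):
    print(K, round(mm1k_blocking(0.8, K), 4))  # discard rate falls with buffering
```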
Hardware-Assisted Detection of Malicious Software in Embedded Systems One of the critical security threats to computer systems is the execution of malware or malicious software. Several intrusion detection systems have been proposed which perform detection analysis in the software using the audit files generated by the operating system. Software-based solutions to this problem are relatively slow, so these techniques can be used forensically, but not in real-time to stop an exploit before it has an opportunity to do damage. We present a technique to implement intrusion detection for secure embedded systems by detecting behavioral differences between the correct system and the malware. The system is implemented using FPGA logic to enable the detection process to be regularly updated to adapt to new malware and changing system behavior.
CoRQ: Enabling Runtime Reconfiguration Under WCET Guarantees for Real-Time Systems. Real-time systems have an increasing demand for predictable performance. Only recently novel models and analyses were proposed that make the performance benefits of runtime-reconfigurable architectures accessible for optimized worst-case execution time (WCET) guarantees. However, the implicit assumption in these works is that the process of reconfiguration itself complies with execution time guara...
FPGA-Centric Design Process for Avionic Simulation and Test. Real-time computing systems are increasingly used in aerospace and avionic industries. In the face of power challenge, performance requirements and demands for higher flexibility, hardware designers are directed toward reconfigurable computing using field programmable gate arrays (FPGAs) that offer high computation rates per watt and adaptability to the application constraints. However, considerin...
Design and implementation of Performance Analysis Unit (PAU) for AXI-based multi-core System on Chip (SOC) With the rapid development of semiconductor technology, more complicated systems have been integrated into single chips. However, system performance is not increased in proportion to the gate count of the system. This is mainly because the optimized design of the system becomes more difficult as systems become more complicated. Therefore, it is essential to understand the internal behavior of the system and utilize the system resources effectively in System on Chip (SOC) design. In this paper, we design a Performance Analysis Unit (PAU) for monitoring the AMBA Advanced eXtensible Interface (AXI) bus as a mechanism to investigate the internal and dynamic behavior of an SOC, especially internal bus activities. A case study of the PAU for an H.264 decoder application is also presented to show how the PAU is utilized in an SOC platform. The PAU has the capability to measure major system performance metrics, such as bus latency, amount of bus traffic, contention between master/slave devices, and bus utilization for specific durations. This paper also presents a distributor and synchronization method to connect multiple PAUs to monitor multiple internal buses of a large SOC.
Aker: A Design and Verification Framework for Safe and Secure SoC Access Control Modern systems on a chip (SoCs) utilize heterogeneous architectures where multiple IP cores have concurrent access to on-chip shared resources. In security-critical applications, IP cores have different privilege levels for accessing shared resources, which must be regulated by an access control system. Aker is a design and verification framework for SoC access control. Aker builds upon the Access...
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop-gain enhancement without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with a maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks.Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic net-works, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
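A minimal sketch of the randomized rotation at LEACH's core, using the published threshold T(n) = P / (1 − P·(r mod 1/P)) for nodes that have not yet served as cluster head in the current epoch; the network size and per-round bookkeeping here are illustrative only (the real protocol also handles cluster formation, TDMA scheduling, and data fusion).

```python
import random

def elect_heads(nodes, P, r, been_head):
    """Each eligible node self-elects with LEACH's threshold
    T = P / (1 - P * (r mod round(1/P)))."""
    T = P / (1 - P * (r % round(1 / P)))
    return {n for n in nodes if n not in been_head and random.random() < T}

P, nodes, been_head = 0.05, range(100), set()
for r in range(round(1 / P)):          # one rotation epoch of 1/P rounds
    heads = elect_heads(nodes, P, r, been_head)
    been_head |= heads                 # heads sit out the rest of the epoch
    print(r, len(heads))               # roughly P * N heads per round
```

The threshold grows as the epoch progresses and reaches 1 in the final round, which is how every node is guaranteed to serve as cluster head exactly once per epoch, spreading the energy cost evenly.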
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
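The decomposition is easy to check numerically. Below is a minimal sketch, assuming complex-envelope samples and a hypothetical maximum envelope v_max: any sample with |s| ≤ v_max splits into two constant-envelope phasors at outphasing angle θ = arccos(|s|/v_max), and their passive sum restores the original.

```python
import cmath
import math

def linc_split(s, v_max):
    """Split one complex-envelope sample (|s| <= v_max) into two
    constant-envelope phasors of magnitude v_max / 2."""
    theta = math.acos(abs(s) / v_max)          # outphasing angle
    phi = cmath.phase(s)
    s1 = (v_max / 2) * cmath.exp(1j * (phi + theta))
    s2 = (v_max / 2) * cmath.exp(1j * (phi - theta))
    return s1, s2

s1, s2 = linc_split(0.3 + 0.4j, v_max=1.0)     # test sample with envelope 0.5
print(abs(s1), abs(s2))                        # both 0.5: constant envelope
print(s1 + s2)                                 # ~ (0.3 + 0.4j): sum restores s
```

Because each component has constant envelope, it can pass through a strongly nonlinear amplifier without distortion; all amplitude information reappears only after the linear combining step.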
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multi-input-multi-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general-purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant with the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
1.2
0.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
Side-Channel Attacks on Cryptographic Software When it comes to cryptographic software, side channels are an often-overlooked threat. A side channel is any observable side effect of computation that an attacker could measure and possibly influence. In the software world, side-channel attacks have sometimes been dismissed as impractical. However, new system architecture features, such as larger cache sizes and multicore processors, have increased the prevalence of side channels and quality of measurement available to an attacker. This article explains three recent side-channel attacks on cryptographic software, exploiting a comparison function, CPU cache timing, and branch prediction logic to recover a secret key. Software developers must be aware of the potential for side-channel attacks and plan appropriately.
TILE64 - Processor: A 64-Core SoC with Mesh Interconnect The TILE64™ processor is a multicore SoC targeting the high-performance demands of a wide range of embedded applications across networking and digital multimedia applications. A figure shows a block diagram with 64 tile processors arranged in an 8x8 array. These tiles connect through a scalable 2D mesh network with high-speed I/Os on the periphery. Each general-purpose processor is identical and capable of running SMP Linux.
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Assembly Of Long Error-Prone Reads Using De Bruijn Graphs The recent breakthroughs in assembling long error-prone reads were based on the overlap-layout-consensus (OLC) approach and did not utilize the strengths of the alternative de Bruijn graph approach to genome assembly. Moreover, these studies often assume that applications of the de Bruijn graph approach are limited to short and accurate reads and that the OLC approach is the only practical paradigm for assembling long error-prone reads. We show how to generalize de Bruijn graphs for assembling long error-prone reads and describe the ABruijn assembler, which combines the de Bruijn graph and the OLC approaches and results in accurate genome reconstructions.
Make the Most out of Last Level Cache in Intel Processors In modern (Intel) processors, Last Level Cache (LLC) is divided into multiple slices and an undocumented hashing algorithm (aka Complex Addressing) maps different parts of memory address space among these slices to increase the effective memory bandwidth. After a careful study of Intel's Complex Addressing, we introduce a slice-aware memory management scheme, wherein frequently used data can be accessed faster via the LLC. Using our proposed scheme, we show that a key-value store can potentially improve its average performance ~12.2% and ~11.4% for 100% & 95% GET workloads, respectively. Furthermore, we propose CacheDirector, a network I/O solution which extends Direct Data I/O (DDIO) and places the packet's header in the slice of the LLC that is closest to the relevant processing core. We implemented CacheDirector as an extension to DPDK and evaluated our proposed solution for latency-critical applications in Network Function Virtualization (NFV) systems. Evaluation results show that CacheDirector makes packet processing faster by reducing tail latencies (90-99th percentiles) by up to 119 μs (~21.5%) for optimized NFV service chains that are running at 100 Gbps. Finally, we analyze the effectiveness of slice-aware memory management to realize cache isolation.
On-Chip Interconnection Architecture of the Tile Processor iMesh, the Tile Processor Architecture's on-chip interconnection network, connects the multicore processor's tiles with five 2D mesh networks, each specialized for a different use. Taking advantage of the five networks, the C-based iLib interconnection library efficiently maps program communication across the on-chip interconnect. The Tile Processor's first implementation, the TILE64, contains 64 cores and can execute 192 billion 32-bit operations per second at 1 GHz.
Cross-Tenant Side-Channel Attacks in PaaS Clouds We present a new attack framework for conducting cache-based side-channel attacks and demonstrate this framework in attacks between tenants on commercial Platform-as-a-Service (PaaS) clouds. Our framework uses the FLUSH-RELOAD attack of Gullasch et al. as a primitive, and extends this work by leveraging it within an automaton-driven strategy for tracing a victim's execution. We leverage our framework first to confirm co-location of tenants and then to extract secrets across tenant boundaries. We specifically demonstrate attacks to collect potentially sensitive application data (e.g., the number of items in a shopping cart), to hijack user accounts, and to break SAML single sign-on. To the best of our knowledge, our attacks are the first granular, cross-tenant, side-channel attacks successfully demonstrated on state-of-the-art commercial clouds, PaaS or otherwise.
InvisiSpec - Making Speculative Execution Invisible in the Cache Hierarchy. Hardware speculation offers a major surface for micro-architectural covert and side channel attacks. Unfortunately, defending against speculative execution attacks is challenging. The reason is that speculations destined to be squashed execute incorrect instructions, outside the scope of what programmers and compilers reason about. Further, any change to micro-architectural state made by speculative execution can leak information. In this paper, we propose InvisiSpec, a novel strategy to defend against hardware speculation attacks in multiprocessors by making speculation invisible in the data cache hierarchy. InvisiSpec blocks micro-architectural covert and side channels through the multiprocessor data cache hierarchy due to speculative loads. In InvisiSpec, unsafe speculative loads read data into a speculative buffer, without modifying the cache hierarchy. When the loads become safe, InvisiSpec makes them visible to the rest of the system. InvisiSpec identifies loads that might have violated memory consistency and, at this time, forces them to perform a validation step. We propose two InvisiSpec designs: one to defend against Spectre-like attacks and another to defend against futuristic attacks, where any speculative load may pose a threat. Our simulations with 23 SPEC and 10 PARSEC workloads show that InvisiSpec is effective. Under TSO, using fences to defend against Spectre attacks slows down execution by 74% relative to a conventional, insecure processor; InvisiSpec reduces the execution slowdown to only 21%. Using fences to defend against futuristic attacks slows down execution by 208%; InvisiSpec reduces the slowdown to 72%.
Combining control-flow integrity and static analysis for efficient and validated data sandboxing In many software attacks, inducing an illegal control-flow transfer in the target system is one common step. Control-Flow Integrity (CFI) protects a software system by enforcing a pre-determined control-flow graph. In addition to providing strong security, CFI enables static analysis on low-level code. This paper evaluates whether CFI-enabled static analysis can help build efficient and validated data sandboxing. Previous systems generally sandbox memory writes for integrity, but avoid protecting confidentiality due to the high overhead of sandboxing memory reads. To reduce overhead, we have implemented a series of optimizations that remove sandboxing instructions if they are proven unnecessary by static analysis. On top of CFI, our system adds only 2.7% runtime overhead on SPECint2000 for sandboxing memory writes and adds modest 19% for sandboxing both reads and writes. We have also built a principled data-sandboxing verifier based on range analysis. The verifier checks the safety of the results of the optimizer, which removes the need to trust the rewriter and optimizer. Our results show that the combination of CFI and static analysis has the potential of bringing down the cost of general inlined reference monitors, while maintaining strong security.
Self-stabilizing systems in spite of distributed control The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.
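As an illustration of self-stabilization in this style, the following is a minimal Python simulation of a Dijkstra-style K-state token ring, a sketch assuming a central daemon that schedules one privileged process per step; the values of N and K are arbitrary (K > N suffices). From an arbitrary initial state the ring converges to exactly one privilege.

```python
import random

# Dijkstra-style K-state token ring: N processes in a ring, process 0 is
# distinguished. A process holds a "privilege" (the token) when its guard is
# true; from any start state, exactly one privilege eventually circulates.

N, K = 5, 6  # K > N guarantees stabilization
state = [random.randrange(K) for _ in range(N)]  # arbitrary, possibly illegal

def privileged(i):
    if i == 0:
        return state[0] == state[N - 1]
    return state[i] != state[i - 1]

def step(i):
    if i == 0:
        state[0] = (state[N - 1] + 1) % K
    else:
        state[i] = state[i - 1]

for _ in range(100):  # central daemon: pick one privileged process per step
    movers = [i for i in range(N) if privileged(i)]  # never empty
    step(random.choice(movers))

print(sum(privileged(i) for i in range(N)))  # 1: a single token survives
```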
Finite-Time Stability of Continuous Autonomous Systems Finite-time stability is defined for equilibria of continuous but non-Lipschitzian autonomous systems. Continuity, Lipschitz continuity, and Hölder continuity of the settling-time function are studied and illustrated with several examples. Lyapunov and converse Lyapunov results involving scalar differential inequalities are given for finite-time stability. It is shown that the regularity properties of the Lyapunov function and those of the settling-time function are related. Consequently, converse Lyapunov results can only assure the existence of continuous Lyapunov functions. Finally, the sensitivity of finite-time-stable systems to perturbations is investigated.
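For concreteness, the textbook example usually used to illustrate this setting (not quoted from the paper itself) is the scalar system below, which is continuous but non-Lipschitz at the origin and reaches it in finite time:

```latex
\[
  \dot{x} = -k\,|x|^{\alpha}\,\operatorname{sign}(x), \qquad k>0,\ 0<\alpha<1 .
\]
% Separation of variables yields the settling-time function
\[
  T(x_0) \;=\; \frac{|x_0|^{\,1-\alpha}}{k\,(1-\alpha)} ,
\]
% which is Hölder continuous in the initial condition, and V(x) = x^2 satisfies
% a scalar differential inequality of the kind used in the Lyapunov results:
\[
  \dot{V} \;\le\; -2k\,V^{\frac{1+\alpha}{2}} .
\]
```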
Computing symmetric boolean functions by circuits with few exact threshold gates We consider constant depth circuits augmented with few exact threshold gates with arbitrary weights. We prove strong (up to exponential) size lower bounds for such circuits computing symmetric Boolean functions. Our lower bound is expressed in terms of a natural parameter, the balance, of symmetric functions. Furthermore, in the quasi-polynomial size setting our results provide an exact characterization of the class of symmetric functions in terms of their balance.
A Hybrid Threshold Self-Compensation Rectifier for RF Energy Harvesting This paper presents a novel highly efficient 5-stage RF rectifier in an SMIC 65 nm standard CMOS process. To improve power conversion efficiency (PCE) and reduce the minimum input voltage, a hybrid threshold self-compensation approach is applied in the proposed RF rectifier, which combines gate-bias threshold compensation with body-effect compensation. The proposed circuit uses PMOSFETs in all stages except the first to allow individual body-bias, which eliminates the need for triple-well technology. The presented RF rectifier exhibits a simulated maximum PCE of 30% at -16.7 dBm (20.25 μW) and produces 1.74 V across a 0.5 MΩ load resistance. With a 1 MΩ load resistance, it outputs 1.5 V DC from a remarkably low RF input power of -20.4 dBm (9 μW) with a PCE of about 25%.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores (score_0–score_13): 1.102334, 0.1, 0.1, 0.1, 0.1, 0.033334, 0.008062, 0.000502, 0.000011, 0, 0, 0, 0, 0
Exposing Software Defined Radio Functionality To Native Operating System Applications via Virtual Devices Many reconfigurable platforms require that applications be written specifically to take advantage of the reconfigurable hardware. In a PC-based environment, this presents an undesirable constraint in that the many already available applications cannot leverage on such hardware. Greatest benefit can only be derived from reconfigurable devices if even native OS applications can transparently utilize reconfigurable devices as they would normal full-fledged hardware devices. This paper presents how Proteus Virtual Devices are used to expose reconfigurable hardware in a transparent manner for use by typical native OS applications.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
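The dominance-frontier concept admits a very short implementation; the sketch below uses the later Cooper-Harvey-Kennedy formulation (not this paper's original algorithm) on a hypothetical diamond-shaped CFG, assuming immediate dominators are already computed.

```python
# Dominance frontiers via the Cooper-Harvey-Kennedy formulation, assuming
# immediate dominators `idom` are known. The CFG below is a toy example:
# entry branches to a and b, which rejoin at merge.

preds = {  # predecessor lists
    "entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"],
}
idom = {"a": "entry", "b": "entry", "merge": "entry"}  # entry has no idom

df = {n: set() for n in preds}
for b, ps in preds.items():
    if len(ps) >= 2:                  # only join points contribute frontiers
        for p in ps:
            runner = p
            while runner != idom[b]:  # walk up the dominator tree
                df[runner].add(b)     # b is in runner's dominance frontier
                runner = idom[runner]

# SSA construction places phi-nodes exactly at the dominance frontier of
# each definition site.
print(df)  # {'entry': set(), 'a': {'merge'}, 'b': {'merge'}, 'merge': set()}
```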
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
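A minimal sketch of Chord's key-to-node mapping and finger-table routing state on a tiny identifier circle; the node IDs and the 4-bit space are illustrative, and real Chord hashes node addresses and keys with SHA-1.

```python
# Chord's consistent-hashing core on a 4-bit identifier circle (IDs 0..15).
m = 4
nodes = sorted([1, 4, 9, 11, 14])  # hypothetical node identifiers

def successor(x):            # first node clockwise from identifier x
    for n in nodes:
        if n >= x:
            return n
    return nodes[0]          # wrap around the circle

def fingers(n):              # finger i points at successor(n + 2^i)
    return [successor((n + 2**i) % 2**m) for i in range(m)]

print(successor(6))   # key 6 is stored at node 9
print(fingers(1))     # node 1's O(log N) routing state: [4, 4, 9, 9]
```

Routing a lookup repeatedly forwards to the finger closest below the target key, which halves the remaining identifier distance at each hop and yields the logarithmic lookup cost stated above.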
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
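As a concrete instance, here is a minimal NumPy sketch of ADMM applied to the lasso, one of the applications listed above; the problem sizes, the penalty lam, and the step rho are arbitrary illustration choices.

```python
import numpy as np

# ADMM for the lasso: minimize (1/2)||Ax - b||^2 + lam*||x||_1,
# split as f(x) + g(z) with the constraint x = z.

rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
lam, rho = 0.5, 1.0

x, z, u = np.zeros(10), np.zeros(10), np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(10))  # factor once, reuse each iter

soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    # x-update: ridge-like solve via the cached Cholesky factor
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    z = soft(x + u, lam / rho)   # z-update: prox of the l1 term
    u = u + x - z                # scaled dual update

print(np.round(z, 3))  # sparse solution iterate
```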
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of < 50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Fractional-N Sub-Sampling PLL using a Pipelined Phase-Interpolator With an FoM of -250 dB. A fractional-N sub-sampling PLL architecture based on pipelined phase-interpolator and Digital-to-Time-Converter (DTC) is presented in this paper. The combination of pipelined phase-interpolator and DTC enables efficient design of the multi-phase generation mechanism required for the fractional operation. This technique can be used for designing a fractional-N PLL with low in-band phase noise and ...
A 2.9–4.0-GHz Fractional-N Digital PLL With Bang-Bang Phase Detector and 560-fs Integrated Jitter at 4.5-mW Power This paper introduces a ΔΣ fractional-N digital PLL based on a single-bit TDC. A digital-to-time converter, placed in the feedback path, cancels out the quantization noise introduced by the dithering of the frequency divider modulus and makes it possible to achieve low noise at low power. The PLL is implemented in a standard 65-nm CMOS process. It achieves −102-dBc/Hz phase noise at 50-kHz offset and a total absolute jitter below 560 fs rms (integrated from 3 kHz to 30 MHz), even in the worst case of a −42-dBc in-band fractional spur. The synthesizer tuning range spans from 2.92 GHz to 4.05 GHz with 70-Hz resolution. The total power consumption is 4.5 mW, which leads to the best jitter-power trade-off obtained with a fractional-N synthesizer. The synthesizer demonstrates the capability of frequency modulation up to a 1.25-Mb/s data rate.
A 9.2–12.7 GHz Wideband Fractional-N Subsampling PLL in 28 nm CMOS With 280 fs RMS Jitter This paper describes a fractional-N subsampling PLL in 28 nm CMOS. Fractional phase lock is made possible with almost no penalty in phase noise performance thanks to the use of a 10 bit, 0.55 ps/LSB digital-to-time converter (DTC) circuit operating on the sampling clock. The performance limitations of a practical DTC implementation are considered, and techniques for minimizing these limitations are presented. For example, background calibration guarantees appropriate DTC gain, reducing spurs. Operating at 10 GHz the system achieves −38 dBc of integrated phase noise (280 fs RMS jitter) when a worst case fractional spur of −43 dBc is present. In-band phase noise is at the level of −104 dBc/Hz. The class-B VCO can be tuned from 9.2 GHz to 12.7 GHz (32%). The total power consumption of the synthesizer, including the VCO, is 13 mW from 0.9 V and 1.8 V supplies.
A Low Noise Sub-Sampling PLL in Which Divider Noise is Eliminated and PD/CP Noise is Not Multiplied by N² This paper presents a 2.2-GHz low jitter sub-sampling based PLL. It uses a phase-detector/charge-pump (PD/CP) that sub-samples the VCO output with the reference clock. In contrast to what happens in a classical PLL, the PD/CP noise is not multiplied by N² in this sub-sampling PLL, resulting in a low noise contribution from the PD/CP. Moreover, no frequency divider is needed in the locked state an...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Understanding Availability This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say "availability" and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex time-varying nature of availability in today's peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2³² for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
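A conceptual Python sketch of the masking idea follows. DSR itself is a compile-time transformation on C programs; the class below only mimics instrumented loads and stores, and the 32-bit mask mirrors the randomization range cited above.

```python
import secrets

# Conceptual DSR sketch: values are stored XOR-masked with a per-object
# random mask and unmasked only at each instrumented access, so a raw
# out-of-bounds overwrite lands in memory under the *wrong* mask.

class MaskedCell:
    def __init__(self, value):
        self.mask = secrets.randbits(32)          # per-object random mask
        self._bits = (value ^ self.mask) & 0xFFFFFFFF

    def load(self):                               # instrumented read
        return self._bits ^ self.mask

    def store(self, value):                       # instrumented write
        self._bits = (value ^ self.mask) & 0xFFFFFFFF

a, b = MaskedCell(1234), MaskedCell(0)
b._bits = 0xDEADBEEF       # simulate an overflow writing raw (unmasked) bytes
print(a.load())            # 1234: legitimate accesses still round-trip
print(b.load() == 0xDEADBEEF)  # False (except with probability 2^-32):
                               # the attacker-controlled value is garbled
```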
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher level of resource integration and the addition of new features in modern multi-processors put a significant pressure on their verification. Although a large amount of resources and time are devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower quality products, lower customer satisfaction, diminishing brand/company reputation, or even expensive product recalls.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0–score_13): 1.2, 0.05, 0.04, 0.018182, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A Wideband Inductorless dB-Linear Automatic Gain Control Amplifier Using a Single-Branch Negative Exponential Generator for Wireline Applications. This paper reports a wideband inductorless automatic gain control (AGC) amplifier for wireline applications. To realize a dB-linear AGC range, a pseudo-folded Gilbert cell driven by a single-branch negative exponential generator (NEG) is proposed as the core variable-gain amplifier. The NEG features a composite of dual Taylor series to extend the AGC approximation range without sacrificing the pre...
A Low-Power 26-GHz Transformer-Based Regulated Cascode SiGe BiCMOS Transimpedance Amplifier Low-power high-speed optical receivers are required to meet the explosive growth in data communication systems. This paper presents a 26 GHz transimpedance amplifier (TIA) that employs a transformer-based regulated cascode (RGC) input stage which provides passive negative-feedback gain that enhances the effective transconductance of the TIA's input common-base transistor, reducing the input resistance and isolating the parasitic photodiode capacitance. This allows for considerable bandwidth extension without significant noise degradation or power consumption. Further bandwidth extension is achieved through series inductive peaking to isolate the photodetector capacitance from the TIA input. The optimum choice of series inductive peaking value and key transformer parameters for bandwidth extension and jitter minimization is analyzed. Fabricated in a 0.25-µm SiGe BiCMOS technology and tested with an on-chip 150 fF capacitor to emulate a photodiode, the TIA achieves a 53 dBΩ single-ended transimpedance gain with a 26 GHz bandwidth and 21.3 pA/√Hz average input-referred noise current spectral density. Total chip power including output buffering is 28.2 mW from a 2.5 V supply, with the core TIA consuming 8.2 mW, and the chip area including pads is 960 µm × 780 µm.
A 32-Gb/s 3.53-mW/Gb/s Adaptive Receiver AFE Employing a Hybrid CTLE, Edge-DFE and Merged Data-DFE/CDR in 65-nm CMOS A 32-Gb/s adaptive receiver analog front-end (AFE) with a hybrid continuous-time linear equalizer (CTLE), a half-rate distributed edge and data decision feedback equalizer (DFE) and a clock data recovery (CDR) is presented. The hybrid CTLE counters the low-frequency as well as the high-frequency loss of 21 dB at Nyquist. Further post-cursors can be handled by using a half-rate, distributed 3-tap edge-DFE and 2-tap data-DFE, which is partially embedded in the CDR. The distributed DFE scheme addresses the inter-symbol interference (ISI) at the edge and reduces the data jitter while the data-DFE guarantees the vertical opening of the data eye. Fabricated in 65-nm CMOS, occupying an active area of 0.3 mm², the proposed prototype demonstrates an improvement of 0.15 UI in the horizontal eye opening of the data output over a receiver AFE with the conventional 5-tap data-DFE at BER = 10⁻¹², under a pseudorandom binary sequence (PRBS) of 2³¹−1. A competitive power efficiency of 3.53 mW/Gb/s is measured with a supply voltage of 1.2 V.
A 100-Gb/s PAM-4 Optical Receiver With 2-Tap FFE and 2-Tap Direct-Feedback DFE in 28-nm CMOS Optical receivers (ORXs) with integrated CMOS electronics enable compact, low-power solutions for 400-G Ethernet and co-packaged optics. In this article, we present a 100-Gb/s PAM-4 ORX with TIA and sampler integrated into a single 28-nm CMOS IC. ORX sensitivity is optimized using a low noise, sub-Nyquist bandwidth TIA followed by a mixed signal sampler that includes 2-tap FFE and 2-tap DFE. A dis...
A 32 Gb/s, 4.7 pJ/bit Optical Link With -11.7 dBm Sensitivity in 14-nm FinFET CMOS. This paper presents a 32 Gb/s non-return-to-zero optical link using 850-nm vertical-cavity surface-emitting laser-based multi-mode optics with 14-nm bulk FinFET CMOS circuits. The target application is the integration of optics on to the first-level package, connecting high-speed optical I/O directly to an advanced CMOS host chip (e.g., processor and switch) to increase package I/O bandwidth densi...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
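The independence-assumption heuristic can be stated in a few lines; below is a sketch of an additive heuristic of this family, computed as a Bellman-style fixpoint over atom costs on a hypothetical STRIPS toy domain (the atoms, actions, and unit action cost are illustrative, not taken from the paper).

```python
# Additive heuristic under the independence assumption: the estimated cost of
# an atom is 0 if it holds in the state, else 1 + sum of its achiever's
# precondition costs; a state's value is the sum of its goal-atom costs.

INF = float("inf")
actions = [  # (preconditions, add effects)
    ({"at_a"}, {"at_b"}),
    ({"at_b"}, {"at_c"}),
    ({"at_b", "have_key"}, {"door_open"}),
]
state, goal = {"at_a", "have_key"}, {"at_c", "door_open"}

cost = {p: (0 if p in state else INF)
        for p in set().union(state, goal, *(pre | add for pre, add in actions))}

changed = True
while changed:                            # fixpoint over atom costs
    changed = False
    for pre, add in actions:
        c = 1 + sum(cost[p] for p in pre) # independence assumption
        for q in add:
            if c < cost[q]:
                cost[q], changed = c, True

print(sum(cost[g] for g in goal))  # h(state) = 2 + 2 = 4
```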
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware "neurons" that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
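A minimal NumPy sketch of the PNN decision rule: one Gaussian pattern unit per training example, one summation unit per class, and an argmax decision unit. The data are synthetic and sigma is an arbitrary smoothing choice.

```python
import numpy as np

# PNN classification: each training example is a pattern unit with an
# exponential (Gaussian-kernel) activation; summation units average per
# class; the decision unit takes the argmax (a Bayes-style decision).

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (50, 2))   # class 0 samples
X1 = rng.normal(3.0, 1.0, (50, 2))   # class 1 samples
sigma = 0.8                          # smoothing parameter

def pnn_classify(x, classes=(X0, X1)):
    scores = []
    for Xc in classes:               # summation unit per class
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma**2))))
    return int(np.argmax(scores))    # decision unit

print(pnn_classify(np.array([0.2, -0.1])))  # 0
print(pnn_classify(np.array([2.8, 3.1])))   # 1
```

Note that "training" is just storing the examples, which is the source of the speed advantage over back-propagation claimed above.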
Towards a higher-order synchronous data-flow language The paper introduces a higher-order synchronous data-flow language in which communication channels may themselves transport programs. This provides a mean to dynamically reconfigure data-flow processes. The language comes as a natural and strict extension of both lustre and lucy. This extension is conservative, in the sense that a first-order restriction of the language can receive the same semantics.We illustrate the expressivity of the language with some examples, before giving the formal semantics of the underlying calculus. The language is equipped with a polymorphic type system allowing types to be automatically inferred and a clock calculus rejecting programs for which synchronous execution cannot be statically guaranteed. To our knowledge, this is the first higher-order synchronous data-flow language where stream functions are first class citizens.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%–8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores (score_0–score_13): 1.11, 0.1, 0.1, 0.1, 0.033333, 0, 0, 0, 0, 0, 0, 0, 0, 0
Efficient Architecture-Aware Acceleration of BWA-MEM for Multicore Systems Innovations in Next-Generation Sequencing are enabling generation of DNA sequence data at ever faster rates and at very low cost. For example, the Illumina NovaSeq 6000 sequencer can generate 6 Terabases of data in less than two days, sequencing nearly 20 Billion short DNA fragments called reads at the low cost of $1000 per human genome. Large sequencing centers typically employ hundreds of such systems. Such high-throughput and low-cost generation of data underscores the need for commensurate acceleration in downstream computational analysis of the sequencing data. A fundamental step in downstream analysis is mapping of the reads to a long reference DNA sequence, such as a reference human genome. Sequence mapping is a compute-intensive step that accounts for more than 30% of the overall time of the GATK (Genome Analysis ToolKit) best practices workflow. BWA-MEM is one of the most widely used tools for sequence mapping and has tens of thousands of users. In this work, we focus on accelerating BWA-MEM through an efficient architecture-aware implementation, while maintaining identical output. The volume of data requires distributed computing and is usually processed on clusters or cloud deployments with multicore processors usually being the platform of choice. Since the application can be easily parallelized across multiple sockets (even across distributed memory systems) by simply distributing the reads equally, we focus on performance improvements on a single socket multicore processor. BWA-MEM run time is dominated by three kernels, collectively responsible for more than 85% of the overall compute time. We improved the performance of the three kernels by 1) using techniques to improve cache reuse, 2) simplifying the algorithms, 3) replacing many small memory allocations with a few large contiguous ones to improve hardware prefetching of data, 4) software prefetching of data, and 5) utilization of SIMD wherever applicable and massive reorganization of the source code to enable these improvements. As a result, we achieved nearly 2x, 183x, and 8x speedups on the three kernels, respectively, resulting in up to 3.5x and 2.4x speedups on end-to-end compute time over the original BWA-MEM on single thread and single socket of an Intel Xeon Skylake processor. To the best of our knowledge, this is the highest reported speedup over BWA-MEM (running on a single CPU) while using a single CPU or a single CPU-single GPGPU/FPGA combination.
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All of the three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively.
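For reference, the local-alignment recurrence that such GPU kernels tile and parallelize is the classic Smith-Waterman dynamic program; below is a single-threaded sketch with illustrative scoring parameters (match +2, mismatch -1, linear gap -1).

```python
# Smith-Waterman local alignment score: H[i][j] is the best alignment score
# ending at a[i-1], b[j-1]; clamping at 0 is what makes the alignment local.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best  # score of the best local alignment

print(smith_waterman("GATTACA", "GCATGCA"))
```

Each cell depends only on its left, upper, and upper-left neighbors, which is why anti-diagonals (and the tiles built from them) can be computed in parallel on a GPU.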
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FPGA Accelerated INDEL Realignment in the Cloud The amount of data being generated in genomics is predicted to be between 2 and 40 exabytes per year for the next decade, making genomic analysis the new frontier and the new challenge for precision medicine. This paper explores targeted deployment of hardware accelerators in the cloud to improve the runtime and throughput of immense-scale genomic data analyses. In particular, INDEL (INsertion/DELetion) realignment is a critical operation that enables diagnostic testing of cancer through error correction prior to variant calling. It is the slowest part of the somatic (cancer) genomic analysis pipeline, the alignment refinement pipeline, and represents roughly one-third of the execution time of time-sensitive diagnostics for acute cancer patients. To accelerate genomic analysis, this paper describes a hardware accelerator for INDEL realignment (IR), and a hardware-software framework leveraging FPGAs-as-a-service in the cloud. We chose to implement genomics analytics on FPGAs because genomic algorithms are still rapidly evolving (e.g. the de facto standard "GATK Best Practices" has had five releases since January of this year). We chose to deploy genomics accelerators in the cloud to reduce capital expenditure and to provide a more quantitative performance and cost analysis. We built and deployed a sea of IR accelerators using our hardware-software accelerator development framework on AWS EC2 F1 instances. We show that our IR accelerator system performed 81x better than multi-threaded genomic analysis software while being 32x more cost efficient.
SeGraM: a universal hardware accelerator for genomic sequence-to-graph and sequence-to-sequence mapping A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need to have a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. Since sequence-to-sequence mapping can be treated as a special case of sequence-to-graph mapping, we aim to design an accelerator that is efficient for both linear and graph-based read mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator, which finds the candidate locations in a given genome graph; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator, which performs alignment between a given read and the subgraph identified by MinSeed. We couple SeGraM with high-bandwidth memory to exploit low latency and highly-parallel memory access, which alleviates the memory bottleneck. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph (i.e., S2G) and sequence-to-sequence (i.e., S2S) mapping pipelines. First, SeGraM outperforms state-of-the-art S2G mapping tools by 5.9×/3.9× and 106×/742× for long and short reads, respectively, while reducing power consumption by 4.1×/4.4× and 3.0×/3.2×. Second, BitAlign outperforms a state-of-the-art S2G alignment tool by 41×-539× and three S2S alignment accelerators by 1.2×-4.8×. We conclude that SeGraM is a high-performance and low-cost universal genomics mapping accelerator that efficiently supports both sequence-to-graph and sequence-to-sequence mapping pipelines.
An FPGA Implementation of a Portable DNA Sequencing Device Based on RISC-V Miniature and mobile DNA sequencers are steadily growing in popularity as effective tools for genetics research. As basecalling algorithms continue to evolve and improve in accuracy, basecalling still poses a serious challenge for small computing devices. Although general-purpose computing chips such as CPUs and GPUs can achieve fast results, they are not energy efficient enough for mobile applications. This paper presents an innovative solution: a basecalling hardware architecture based on the RISC-V ISA. After validation on our custom FPGA verification platform, it demonstrates a 1.95x energy-efficiency improvement over x86 and a 38% improvement over ARM. In addition, this study also completes the verification work for subsequent ASIC designs.
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
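The adjacency idea can be sketched in a few lines of Python: hash all seeds of the genome, then only verify read locations supported by several agreeing seeds. Function names, the seed length, and the vote threshold are illustrative, not FastHASH's actual code.

```python
# Toy seed-and-extend mapper with an adjacency-style filter (illustrative).

K = 4  # seed length

def build_index(genome, k=K):
    index = {}
    for i in range(len(genome) - k + 1):
        index.setdefault(genome[i:i + k], []).append(i)
    return index

def candidate_starts(read, index, k=K):
    """Vote for genome start positions implied by each non-overlapping seed."""
    votes = {}
    for off in range(0, len(read) - k + 1, k):
        for pos in index.get(read[off:off + k], []):
            votes[pos - off] = votes.get(pos - off, 0) + 1
    return votes

def adjacency_filter(votes, min_seeds=2):
    """Only starts supported by adjacent seeds reach the expensive
    edit-distance verification step."""
    return [start for start, v in votes.items() if v >= min_seeds]

genome = "ACGTACGTTTGACCAGT"
read = "ACGTTTGA"
print(adjacency_filter(candidate_starts(read, build_index(genome))))  # [4]
```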
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called the semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector and a logical function is expressed as a multilinear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) the transient period, for all points to enter the set of attractors; and d) the basin of each attractor. The corresponding algorithms are developed and applied to several examples.
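The conversion can be reproduced on a toy network with plain numpy: encode each joint state as a canonical basis vector, build the 2^n × 2^n transition matrix L, and read the attractor counts off traces of powers of L. An illustrative worked example, not the paper's semi-tensor product machinery:

```python
# Algebraic (linear) form of a small Boolean network (illustrative example).
import numpy as np
from itertools import product

# Two-node network: x1(t+1) = x2(t), x2(t+1) = x1(t) AND x2(t).
f = lambda x1, x2: (x2, x1 and x2)

# Each joint state (x1, x2) becomes a canonical basis vector of R^4, so a
# synchronous update is multiplication by a 4x4 0/1 transition matrix L.
states = list(product([1, 0], repeat=2))
idx = {s: i for i, s in enumerate(states)}
L = np.zeros((4, 4), dtype=int)
for s in states:
    L[idx[f(*s)], idx[s]] = 1

# Fixed points sit on the diagonal of L; length-k cycles show up in L^k.
print("fixed points:", np.trace(L))                    # 2: (1,1) and (0,0)
print("states fixed after two steps:",
      np.trace(np.linalg.matrix_power(L, 2)))          # 2: no extra 2-cycles
```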
The Transitive Reduction of a Directed Graph
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
FPGA Implementation of a High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in the literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A Hybrid Dynamic Load Balancing Algorithm for Distributed Systems Using Genetic Algorithms Dynamic Load Balancing (DLB) is a sine qua non in modern distributed systems to ensure the efficient utilization of the computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. The framework uses a Genetic Algorithm (GA)-based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby substantially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round trip time, and average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm using OnLine Transaction Processing (OLTP) and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, and/or CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves performance under different system states and under a wide range of workloads compared to the traditional decentralized algorithm.
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs), one of today's main flavors of deep learning techniques, dominate various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural networks compression and hardware acceleration, whi...
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
A 56-Gb/s PAM-4 Receiver Using Time-Based LSB Decoder and S/H Technique for Robustness to Comparator Voltage Variations This article presents a 0.975-pJ/bit 56-Gb/s pulse amplitude modulation-4 (PAM-4) receiver using a time-based least significant bit (LSB) decoder in 28-nm CMOS technology. The proposed time-domain decision technique improves the robustness to comparator voltage variations by separating the data and reference paths. If the reference voltage difference is constant regardless of the common-mode voltage shift, the time-domain decoder achieves a low bit error rate (BER). To improve the timing margin of the LSB decoder from the data-dependent jitter, a sample-and-hold (S/H) structure is adopted in both the data and reference paths. The S/H circuits extend the timing margin by converting the input of the comparators to a constant voltage. The number of comparators for data decoding is reduced to two-thirds, and only eight comparators are required for a quarter-rate structure. The number of comparators in the data path, excluding the reference path, is 4, which reduces the loading capacitance. An adaptive threshold voltage calibration was implemented to generate the timing reference pulse. In addition to bathtub graphs, the BER according to the V_CM change of the reference voltages is plotted to show the sensitivity to the voltage variation.
Current-Mode Triline Transceiver for Coded Differential Signaling Across On-Chip Global Interconnects. This paper presents a current-mode triline ternary-level coded differential signaling scheme for high-speed data transmission across on-chip global interconnects. An energy efficient current-mode triline transceiver pair suitable for this signaling scheme has been proposed. Compared with a voltage mode receiver with resistive termination, the proposed active terminated current-mode receiver reduces...
A 32-Gb/s PAM-4 Quarter-Rate Clock and Data Recovery Circuit With an Input Slew-Rate Tolerant Selective Transition Detector We present a 32-Gb/s PAM-4 quarter-rate clock and data recovery (CDR) circuit having a newly proposed selective transition detector (STD). The STD allows phase detection of PAM-4 data in a simple manner by eliminating middle transition and majority voting with simple logic gates. In addition, using the edge-rotating technique with quarter-rate CDR operation, our CDR achieves power consumption and chip area reduction. A prototype 32-Gb/s quarter-rate PAM-4 CDR circuit is realized with 28-nm CMOS technology. The CDR circuit consumes 32 mW with 1.2-V supply and the recovered clock signal has 0.0136-UI rms jitter.
30-Gb/s 1.11-pJ/bit Single-Ended PAM-3 Transceiver for High-Speed Memory Links A 30-Gb/s three-level pulse amplitude modulation (PAM-3) transceiver is designed with a one-tap tri-level decision feedback equalizer (DFE) to realize a high-speed dynamic random access memory (DRAM) interface in the 28-nm CMOS process. A 1.5-bit/pin bit efficiency is achieved by encoding and decoding 3-bit data in two unit intervals (UIs). The half-rate PAM-3 transmitter modulates single-ended pseudorandom binary sequence (PRBS) 7/15 data using a low-power encoding logic and an output driver. The receiver achieves a bit error rate (BER) of less than 1E-12 over an 80-mm FR-4 printed circuit board (PCB) channel. At the maximum data rate, the bit efficiency of the transceiver is 1.11 pJ/bit, consuming 33.4 mW. In the receiver, the attenuated PAM-3 data are equalized by a continuous-time linear equalizer (CTLE) and a one-tap tri-level DFE, which has the same complexity as that of non-return-to-zero (NRZ) signaling. The tri-state buffers, which have a floating PMOS switch, convert the output of the comparator into NRZ data, resulting in reduced delay and power dissipation. Four channels of the transceivers operate at data rates of up to 30 × 4 Gb/s, and the horizontal eye margin of the measured PAM-3 data is achieved at a UI of 0.14 for the PRBS-7 pattern at the maximum data rate.
A Single-Ended Parallel Transceiver With Four-Bit Four-Wire Four-Level Balanced Coding for the Point-to-Point DRAM Interface. A four-bit four-wire four-level (4B4W4L) single-ended parallel transceiver for the point-to-point DRAM interface achieved a peak reduction of ~10 dB in the electromagnetic interference (EMI) H-field power, compared to a conventional 4-bit parallel binary transceiver with the same output driver power of transmitter (TX) and the same input voltage margin of receiver (RX). A four-level balanced codin...
A 0.14-to-0.29-pJ/bit 14-GBaud/s Trimodal (NRZ/PAM-4/PAM-8) Half-Rate Bang-Bang Clock and Data Recovery (BBCDR) Circuit in 28-nm CMOS This paper reports a half-rate bang-bang clock and data recovery (BBCDR) circuit supporting trimodal (NRZ/PAM-4/PAM-8) operation. The observation of their crossover-points distribution at the transitions introduces the single-loop phase tracking technique. In addition, low-power techniques at both the architecture and circuit levels are employed to greatly improve the overall energy efficiency and multiply data throughput by increasing the number of levels on the magnitude. Fabricated in 28-nm CMOS, our BBCDR prototype scores a 0.29/0.17/0.14 pJ/bit efficiency at 14.4/28.8/43.2 Gb/s under NRZ/PAM-4/PAM-8 modes, respectively. The jitter is < 0.53 ps (integrated from 100 Hz to 1 GHz) with approximately-equivalent constant loop bandwidth, and we achieve at least 1-UIpp jitter tolerance up to 10 MHz for all three modes.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
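The randomized cluster-head rotation can be sketched with the threshold-based self-election used in the LEACH paper; the node count and desired head fraction below are illustrative toy parameters.

```python
# Sketch of LEACH-style randomized cluster-head rotation (toy parameters).
import random

P = 0.05                       # desired fraction of cluster heads per round
CYCLE = round(1 / P)           # every node serves once per 1/P rounds

def threshold(r):
    """Self-election probability for nodes that have not been a cluster
    head in the current cycle; it rises as the cycle nears its end."""
    return P / (1 - P * (r % CYCLE))

rng = random.Random(0)
eligible = set(range(100))     # nodes that have not served recently
for r in range(3):
    heads = {n for n in eligible if rng.random() < threshold(r)}
    eligible -= heads          # heads sit out until the cycle completes
    print(f"round {r}: {len(heads)} cluster heads")
```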
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
1.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
MatRaptor: A Sparse-Sparse Matrix Multiplication Accelerator Based on Row-Wise Product Sparse-sparse matrix multiplication (SpGEMM) is a computation kernel widely used in numerous application domains such as data analytics, graph processing, and scientific computing. In this work we propose MatRaptor, a novel SpGEMM accelerator that is high performance and highly resource efficient. Unlike conventional methods using inner or outer product as the meta operation for matrix multiplication, our approach is based on row-wise product, which offers a better tradeoff in terms of data reuse and on-chip memory requirements, and achieves higher performance for large sparse matrices. We further propose a new hardware-friendly sparse storage format, which allows parallel compute engines to access the sparse data in a vectorized and streaming fashion, leading to high utilization of memory bandwidth. We prototype and simulate our accelerator architecture using gem5 on a diverse set of matrices. Our experiments show that MatRaptor achieves 129.2× speedup over single-threaded CPU, 8.8× speedup over GPU and 1.8× speedup over the state-of-the-art SpGEMM accelerator (OuterSPACE). MatRaptor also has 7.2× lower power consumption and 31.3× smaller area compared to OuterSPACE.
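The row-wise product (Gustavson's ordering) that MatRaptor builds on is compact to state in software: each output row is a sparse combination of rows of B selected by the nonzeros of the corresponding row of A. A pure-Python sketch, not a model of the accelerator itself:

```python
# Row-wise (Gustavson) SpGEMM on dict-of-dicts sparse matrices (illustrative).

def spgemm_row_wise(A, B):
    """A, B: {row: {col: value}}. Returns C = A @ B in the same format."""
    C = {}
    for i, a_row in A.items():
        acc = {}                           # one sparse accumulator per C row
        for k, a_ik in a_row.items():      # nonzeros of A's row i ...
            for j, b_kj in B.get(k, {}).items():   # ... select rows of B
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: 6.0}}
print(spgemm_row_wise(A, B))               # {0: {1: 16.0}, 1: {0: 15.0}}
```

Only one row of A and one output accumulator need to be live at a time, which hints at why the paper finds this ordering attractive for on-chip memory.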
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors. Because the memory layouts and dataflows in these architectures are optimized for the access patterns during inference, however, they do not efficiently support the emerging sparse training techniques. In this paper, we demonstrate (a) that accelerating sparse training requires a co-design approach where algorithms are adapted to suit the constraints of hardware, and (b) that hardware for sparse DNN training must tackle constraints that do not arise in inference accelerators. As proof of concept, we adapt a sparse training algorithm to be amenable to hardware acceleration; we then develop dataflow, data layout, and load-balancing techniques to accelerate it. The resulting system is a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models, without first training, then pruning, and finally retraining a dense model. Compared to training the equivalent unpruned models using a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26× less energy and offers up to 4× speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
Coarse grain reconfigurable architecture (embedded tutorial) The paper gives a brief survey over a decade of R&D on coarse grain reconfigurable hardware and related compilation techniques and points out its significance to the emerging discipline of reconfigurable computing.
Cambricon-F: machine learning computers with fractal von neumann architecture Machine learning techniques are pervasive tools for emerging commercial applications and many dedicated machine learning computers on different scales have been deployed in embedded devices, servers, and data centers. Currently, most machine learning computer architectures still focus on optimizing performance and energy efficiency instead of programming productivity. However, with the fast development in silicon technology, programming productivity, including programming itself and software stack development, becomes the vital factor that hinders the application of machine learning computers, rather than performance and power efficiency. In this paper, we propose Cambricon-F, a series of homogeneous, sequential, multi-layer, layer-similar machine learning computers with the same ISA. A Cambricon-F machine has a fractal von Neumann architecture to iteratively manage its components: it has a von Neumann architecture, and its processing components (sub-nodes) are themselves Cambricon-F machines with von Neumann architecture and the same ISA. Since different Cambricon-F instances with different scales can share the same software stack on their common ISA, Cambricon-F machines can significantly improve programming productivity. Moreover, we address four major challenges in Cambricon-F architecture design, which allow Cambricon-F to achieve high efficiency. We implement two Cambricon-F instances at different scales, i.e., Cambricon-F100 and Cambricon-F1. Compared to GPU-based machines (DGX-1 and 1080Ti), Cambricon-F instances achieve 2.82x and 5.14x better performance and 8.37x and 11.39x better efficiency on average, with 74.5% and 93.8% smaller area costs, respectively.
A detailed power model for field-programmable gate arrays Power has become a critical issue for field-programmable gate array (FPGA) vendors. Understanding the power dissipation within FPGAs is the first step in developing power-efficient architectures and computer-aided design (CAD) tools for FPGAs. This article describes a detailed and flexible power model which has been integrated in the widely used Versatile Place and Route (VPR) CAD tool. This power model estimates the dynamic, short-circuit, and leakage power consumed by FPGAs. It is the first flexible power model developed to evaluate architectural tradeoffs and the efficiency of power-aware CAD tools for a variety of FPGA architectures, and is freely available for noncommercial use. The model is flexible, in that it can estimate the power for a wide variety of FPGA architectures, and it is fast, in that it does not require extensive simulation, meaning it can be used to explore a large architectural space. We show how the model can be used to investigate the impact of various architectural parameters on the energy consumed by the FPGA, focusing on the segment length, switch block topology, lookuptable size, and cluster size.
TGPA: tile-grained pipeline architecture for low latency CNN inference FPGAs are more and more widely used as reconfigurable hardware accelerators for applications leveraging convolutional neural networks (CNNs) in recent years. Previous designs normally adopt a uniform accelerator architecture that processes all layers of a given CNN model one after another. This homogeneous design methodology usually has a dynamic resource underutilization issue due to the tensor shape diversity of different layers. As a result, designs equipped with heterogeneous accelerators specific to different layers were proposed to resolve this issue. However, existing heterogeneous designs sacrifice latency for throughput by concurrent execution of multiple input images on different accelerators. In this paper, we propose an architecture named Tile-Grained Pipeline Architecture (TGPA) for low latency CNN inference. TGPA adopts a heterogeneous design which supports pipelined execution of multiple tiles within a single input image on multiple heterogeneous accelerators. The accelerators are partitioned onto different FPGA dies to guarantee high frequency. A partition strategy is designed to maximize on-chip resource utilization. Experiment results show that TGPA designs for different CNN models achieve up to 40% performance improvement over homogeneous designs, and 3X latency reduction over state-of-the-art designs.
Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks Eyeriss is an accelerator for state-of-the-art deep convolutional neural networks (CNNs). It optimizes for the energy efficiency of the entire system, including the accelerator chip and off-chip DRAM, for various CNN shapes by reconfiguring the architecture. CNNs are widely used in modern AI systems but also bring challenges on throughput and energy efficiency to the underlying hardware. This is b...
GraphPIM: Enabling Instruction-Level PIM Offloading in Graph Computing Frameworks With the emergence of data science, graph computing has become increasingly important these days. Unfortunately, graph computing typically suffers from poor performance when mapped to modern computing systems because of the overhead of executing atomic operations and inefficient utilization of the memory subsystem. Meanwhile, emerging technologies, such as Hybrid Memory Cube (HMC), enable the processing-in-memory (PIM) functionality with offloading operations at an instruction level. Instruction offloading to the PIM side has considerable potentials to overcome the performance bottleneck of graph computing. Nevertheless, this functionality for graph workloads has not been fully explored, and its applications and shortcomings have not been well identified thus far. In this paper, we present GraphPIM, a full-stack solution for graph computing that achieves higher performance using PIM functionality. We perform an analysis on modern graph workloads to assess the applicability of PIM offloading and present hardware and software mechanisms to efficiently make use of the PIM functionality. Following the real-world HMC 2.0 specification, GraphPIM provides performance benefits for graph applications without any user code modification or ISA changes. In addition, we propose an extension to PIM operations that can further bring performance benefits for more graph applications. The evaluation results show that GraphPIM achieves up to a 2.4× speedup with a 37% reduction in energy consumption.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
Mobility Management Strategies in Heterogeneous Cognitive Radio Networks Considering the capacity gain of the secondary system and the capacity loss of the primary system caused by the newly accessing user, a distributed binary power allocation (admittance criterion) is proposed in dense cognitive networks including plentiful ...
Computing symmetric boolean functions by circuits with few exact threshold gates We consider constant depth circuits augmented with few exact threshold gates with arbitrary weights. We prove strong (up to exponential) size lower bounds for such circuits computing symmetric Boolean functions. Our lower bound is expressed in terms of a natural parameter, the balance, of symmetric functions. Furthermore, in the quasi-polynomial size setting our results provide an exact characterization of the class of symmetric functions in terms of their balance.
The Active Control of Maglev Stationary Self-Excited Vibration With a Virtual Energy Harvester This paper addresses the active control of stationary self-excited vibration, which degrades the stability of the levitation control, decreases the ride comfort, and restricts the construction cost of the maglev system. First, a minimum interaction model containing a flexible bridge and a single levitation unit is presented. Based on the minimum interaction model, the principle underlying the self-excited vibration is explored. It shows that the active property of the levitation system is the root of self-excited vibration. Since the energy of vibration may be absorbed by an electromagnetic energy harvester (EEH), a technique applying one to the bridge is proposed, and the stability of the combined system is analyzed. However, its hardware structure is complicated, and the cost of construction is prohibitive. Then the novel concept of the virtual EEH is brought forward, which uses the electromagnetic force to emulate the force of a real energy harvester acting on the bridge. With the estimation of the vertical velocity of the bridge and the frequency of vibration, the self-excited oscillation is avoided as well by adding an extra control instruction to the electromagnet. After building the overall dynamic model in detail, numerical simulations and field experiments are carried out, and the results illustrating the improvement of stability are provided and analyzed.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible enough to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signals with various signal dimensions (128, 256, 384, and 512). Data c...
1.102
0.104
0.1
0.1
0.1
0.05
0.033333
0.004566
0.000045
0
0
0
0
0
Design and implementation of Performance Analysis Unit (PAU) for AXI-based multi-core System on Chip (SOC) With the rapid development of semiconductor technology, more complicated systems have been integrated into single chips. However, system performance is not increased in proportion to the gate-count of the system. This is mainly because the optimized design of the system becomes more difficult as systems become more complicated. Therefore, it is essential to understand the internal behavior of the system and utilize the system resources effectively in System on Chip (SOC) design. In this paper, we design a Performance Analysis Unit (PAU) for monitoring the AMBA Advanced eXtensible Interface (AXI) bus as a mechanism to investigate the internal and dynamic behavior of an SOC, especially for internal bus activities. A case study with the PAU for an H.264 decoder application is also presented to show how the PAU is utilized in an SOC platform. The PAU has the capability to measure major system performance metrics, such as bus latency, amount of bus traffic, contention between master/slave devices, and bus utilization for specific durations. This paper also presents a distributor and synchronization method to connect multiple PAUs to monitor multiple internal buses of a large SOC.
Hardware-Assisted Detection of Malicious Software in Embedded Systems One of the critical security threats to computer systems is the execution of malware or malicious software. Several intrusion detection systems have been proposed which perform detection analysis in the software using the audit files generated by the operating system. Software-based solutions to this problem are relatively slow, so these techniques can be used forensically, but not in real-time to stop an exploit before it has an opportunity to do damage. We present a technique to implement intrusion detection for secure embedded systems by detecting behavioral differences between the correct system and the malware. The system is implemented using FPGA logic to enable the detection process to be regularly updated to adapt to new malware and changing system behavior.
Store-and-Forward Buffer Requirements in a Packet Switching Network Previous analytic models for packet switching networks have always assumed infinite storage capacity in store-and-forward (S/F) nodes. In this paper, we relax this assumption and present a model for a packet switching network in which each node has a finite pool of S/F buffers. A packet arriving at a node in which all S/F buffers are temporarily filled is discarded. The channel transmission control mechanisms of positive acknowledgment and time-out of packets are included in this model. Individual S/F nodes are analyzed separately as queueing networks with different classes of packets. The single node results are interfaced by imposing a continuity of flow constraint. A heuristic algorithm for determining a balanced assignment of nodal S/F buffer capacities is proposed. Numerical results for the performance of a 19 node network are illustrated.
Safely Preventing Unbounded Delays During Bus Transactions in FPGA-based SoC Advanced eXtensible Interface (AXI) is an open-standard communication bus interface implemented in most commercial off-the-shelf FPGA System-on-Chips (SoC) to exchange data within the chip. Unfortunately, the AXI standard does not mandate any mechanism to detect possible misbehavior of the connected modules. This work shows that this lack of specification has a relevant impact on popular implementations of the AXI bus. In particular, it is shown how it is easily possible to inject arbitrarily-long delays on modern FPGA system-on-chips under the presence of misbehaving bus masters. To safely solve this issue, this paper presents a general timing analysis to bound the execution of periodically-invoked hardware accelerators in nominal conditions. This timing analysis is then used to configure a latency-free hardware module named AXI Stall Monitor (ASM), also proposed in this paper, capable of detecting and safely solving possible stalls during AXI bus transactions. The ASM leaves a quantified flexibility to the hardware accelerators when deviating from nominal conditions. The contribution is finally supported by a set of experiments on the Zynq-7000 and Zynq UltraScale+ SoCs by Xilinx.
Is Your Bus Arbiter Really Fair? Restoring Fairness in AXI Interconnects for FPGA SoCs AMBA AXI is a popular bus protocol that is widely adopted as the medium to exchange data in field-programmable gate array system-on-chips (FPGA SoCs). The AXI protocol does not specify how conflicting transactions are arbitrated and hence the design of bus arbiters is left to the vendors that adopt AXI. Typically, a round-robin arbitration is implemented to ensure a fair access to the bus by the master nodes, as for the popular SoCs by Xilinx. This paper addresses a critical issue that can arise when adopting the AXI protocol under round-robin arbitration; specifically, in the presence of bus transactions with heterogeneous burst sizes. First, it is shown that a completely unfair bandwidth distribution can be achieved under some configurations, making possible to arbitrarily decrease the bus bandwidth of a target master node. This issue poses serious performance, safety, and security concerns. Second, a low-latency (one clock cycle) module named AXI burst equalizer (ABE) is proposed to restore fairness. Our investigations and proposals are supported by implementations and tests upon three modern SoCs. Experimental results are reported to confirm the existence of the issue and assess the effectiveness of the ABE with bus traffic generators and hardware accelerators from the Xilinx’s IP library.
Aker: A Design and Verification Framework for Safe and Secure SoC Access Control Modern systems on a chip (SoCs) utilize heterogeneous architectures where multiple IP cores have concurrent access to on-chip shared resources. In security-critical applications, IP cores have different privilege levels for accessing shared resources, which must be regulated by an access control system. Aker is a design and verification framework for SoC access control. Aker builds upon the Access...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
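Chord's single operation can be sketched in a few lines; the identifier width and the linear successor scan below are illustrative simplifications (the real protocol routes through finger tables in O(log N) hops rather than scanning all nodes).

```python
# Sketch of Chord's key-to-node mapping on an identifier ring (toy version).
import hashlib

M = 16                               # bits in the identifier space
RING = 2 ** M

def ident(name):
    """Hash names (node addresses, data keys) onto the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

def successor(key_id, node_ids):
    """A key belongs to the first node at or after it, clockwise."""
    for nid in sorted(node_ids):
        if nid >= key_id:
            return nid
    return min(node_ids)             # wrap around the ring

nodes = {ident(f"node{i}") for i in range(5)}
key = ident("some-data-item")
print(f"key {key} is stored on node {successor(key, nodes)}")
```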
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
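As a worked example of the first decomposition above, here is CP via alternating least squares for a 3-way tensor in plain numpy; this is an unoptimized sketch, not one of the cited toolboxes.

```python
# CP decomposition by alternating least squares (illustrative sketch).
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding with numpy's C-order column convention."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product, matching unfold's column order."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    facs = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for mode in range(3):
            others = [facs[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])
            # Least-squares update: facs[mode] @ kr.T ~= unfold(T, mode).
            facs[mode] = unfold(T, mode) @ np.linalg.pinv(kr.T)
    return facs

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A, B, C)          # an exact rank-2 tensor
Ah, Bh, Ch = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)    # should match T closely
print("relative error:", np.linalg.norm(T - That) / np.linalg.norm(T))
```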
Bayesian Network Classifiers Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
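For reference, the naive Bayes baseline that TAN relaxes fits in a few lines; this sketch uses Laplace smoothing with the observed value count standing in for the true feature domain size, an illustrative simplification.

```python
# Minimal discrete naive Bayes classifier (illustrative sketch).
from collections import Counter, defaultdict
import math

def train_nb(X, y, alpha=1.0):
    classes = Counter(y)
    counts = defaultdict(Counter)        # (class, feature index) -> values
    for xs, c in zip(X, y):
        for f, v in enumerate(xs):
            counts[(c, f)][v] += 1
    return classes, counts, alpha

def predict_nb(model, xs):
    classes, counts, alpha = model
    n = sum(classes.values())
    best, best_lp = None, -math.inf
    for c, nc in classes.items():
        lp = math.log(nc / n)            # class prior
        for f, v in enumerate(xs):       # independence: likelihoods multiply
            vals = counts[(c, f)]
            lp += math.log((vals[v] + alpha) / (nc + alpha * (len(vals) + 1)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "no"]
print(predict_nb(train_nb(X, y), ("rain", "mild")))   # -> "yes"
```

TAN would additionally learn a tree of dependencies between the features and condition each feature on one parent feature as well as the class.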
Identifying and Filtering Near-Duplicate Documents The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size "sketch" for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a "sample" of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.
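The two aspects, shingling into a set of strings and independent min-wise sampling, can be sketched as follows; the shingle width and number of hash functions are toy-sized here, not the few-hundred-byte production sketches described above.

```python
# Resemblance estimation via shingling + min-wise sampling (toy parameters).
import hashlib

def shingles(text, w=3):
    """The set of w-word shingles of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def h(x, seed):
    """A family of hash functions indexed by seed."""
    return int(hashlib.md5(f"{seed}:{x}".encode()).hexdigest(), 16)

def sketch(text, num_hashes=64):
    """Keep, per hash function, the minimum hash over all shingles."""
    S = shingles(text)
    return [min(h(s, i) for s in S) for i in range(num_hashes)]

def resemblance(sk1, sk2):
    """The fraction of agreeing minima estimates |A ∩ B| / |A ∪ B|."""
    return sum(a == b for a, b in zip(sk1, sk2)) / len(sk1)

d1 = "the quick brown fox jumps over the lazy dog near the river bank"
d2 = "the quick brown fox jumps over the lazy cat near the river bank"
print(resemblance(sketch(d1), sketch(d2)))
```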
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-μm CMOS technology in an area of 1.75×1.55 mm², the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10⁻⁹ with a pseudorandom bit sequence of 2²³−1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the μT level [1-3], in contrast to the nT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fS) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fS > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals are continuously monitored.
1.11
0.1
0.1
0.1
0.073333
0.033333
0
0
0
0
0
0
0
0
Design and performance evaluation of software defined radio prototype for PHS and IEEE 802.11 wireless LAN A software defined radio (SDR) prototype based on a multiprocessor architecture (MPA) is developed. Software for the Japanese Personal Handyphone System (PHS), a 2G mobile system, and for IEEE 802.11 wireless LAN, which has a much wider bandwidth than 2G systems, is successfully implemented. A newly developed flexible-rate pre-/post-processor (FR-PPP) achieves the flexibility and wideband performance that the platform needs. This paper shows the design of the SDR prototype and evaluates its performance by experiments that include PHS processor load and processing delay characteristics and wireless LAN processor load and throughput characteristics.
ADC and DSP Challenges in the Development of Software Radio Base Stations The need for software-defined radios raises a number of technical challenges, which play a significant role in the development of third-generation personal communications systems (PCS). Some of the most important technical challenges deal with analog-to-digital conversion and digital signal processing technologies, which, in many cases, cannot provide the advanced hardware needed to support the demanding PCS telecommunications services. In this article, we discuss these two technical challenges, focusing primarily on base station radio systems. We first identify the most important requirements for ADC and DSP technologies, and we then extensively discuss and/or propose enabling schemes that could relax these requirements and aid the implementation of software radio base stations. Furthermore, we discuss new structures that can support the development of physically distributed base stations.
A PC-based software receiver using a novel front-end technology Since the software radio concept was introduced, much progress has been made in the past few years in making it a reality. Many software radio based systems have been designed through the development efforts of both commercial and noncommercial organizations. While the term software radio has meant many things, the ultimate goal in software radio has been the realization of an agile radio that can transmit and receive signals at any carrier frequency using any protocol, all of which can be reprogrammed virtually instantaneously. Such a system places great demands on the limits of data converter and processor technologies since it requires real-time disposition of gigasamples of data produced by direct conversion of wireless signals into digital data. From a processing standpoint, the challenge in software radio is to exploit the three basic processor types (fixed architecture processors, FPGAs, and programmable DSPs/RISCs/CISCs) in such a way as to optimize the three-way trade-offs between speed, power dissipation, and programmability. With respect to the latter characteristic, the issues of high-level language interfaces, portability, and reprogramming speed must be considered. This article describes the architecture and operation of a PC-based software radio receiver. The development environment is a real-time PC-based platform that allows testing to be done in a simple manner using the main software functionality of a PC. The front-end of the receiver implemented in hardware represents a novel wideband design (bandwidth of up to 100 MHz centered at a carrier frequency of up to 2 GHz) that functionally converts wireless signals directly into a gigasample digital data stream in the receiver (and vice versa in the transmitter). This direct conversion approach shows the greatest promise in realizing the main goal of software radio.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
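Dominance frontiers, the new concept mentioned above, can be computed with a short sketch: solve the dominator data-flow equations, derive immediate dominators, then walk each join point's predecessors up the dominator tree. This illustration uses the simple iterative formulation, not the paper's efficient algorithms.

```python
# Dominance frontiers via iterative dominators (illustrative, unoptimized).

def dominators(cfg, entry):
    """Solve dom(n) = {n} ∪ intersection of dom(p) over predecessors p."""
    nodes = set(cfg)
    preds = {n: [p for p in nodes if n in cfg[p]] for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            if not preds[n]:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, preds

def dominance_frontiers(cfg, entry):
    dom, preds = dominators(cfg, entry)
    # Immediate dominator = the deepest strict dominator (they form a chain).
    idom = {n: max(dom[n] - {n}, key=lambda d: len(dom[d]))
            for n in cfg if n != entry}
    df = {n: set() for n in cfg}
    for n in cfg:
        if len(preds[n]) >= 2:              # only join points contribute
            for p in preds[n]:
                runner = p
                while runner != idom[n]:    # walk up to idom(n), exclusive
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG A -> {B, C} -> D: the frontier of B and of C is {D}.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dominance_frontiers(cfg, "A"))
```

Phi-functions for a variable are then placed at the iterated dominance frontier of its definition sites, which is how SSA construction uses this structure.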
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
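A loose high-level analogue in Python: the "program" is a sequence of routine references, and execution just jumps through that sequence, with no opcode-decoding switch in the middle. The paper's technique operates on machine addresses; this sketch only mirrors the control structure.

```python
# Toy threaded-code-style interpreter: code is a list of routine references.

stack = []

def push(const):
    def op(pc, code):
        stack.append(const)
        return pc + 1                 # each routine hands control onward
    return op

def add(pc, code):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return pc + 1

def dup(pc, code):
    stack.append(stack[-1])
    return pc + 1

def halt(pc, code):
    return None

def run(code):
    pc = 0
    while pc is not None:
        pc = code[pc](pc, code)       # dispatch = jump to the next routine

run([push(2), push(3), add, dup, add, halt])
print(stack)                          # [10]: (2 + 3) duplicated and summed
```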
Quick detection of difficult bugs for effective post-silicon validation We present a new technique for systematically creating postsilicon validation tests that quickly detect bugs in processor cores and uncore components (cache controllers, memory controllers, on-chip networks) of multi-core System on Chips (SoCs). Such quick detection is essential because long error detection latency, the time elapsed between the occurrence of an error due to a bug and its manifestation as an observable failure, severely limits the effectiveness of existing post-silicon validation approaches. In addition, we provide a list of realistic bug scenarios abstracted from “difficult” bugs that occurred in commercial multi-core SoCs. Our results for an OpenSPARC T2-like multi-core SoC demonstrate: 1. Error detection latencies of “typical” post-silicon validation tests can be very long, up to billions of clock cycles, especially for bugs in uncore components. 2. Our new technique shortens error detection latencies by several orders of magnitude to only a few hundred cycles for most bug scenarios. 3. Our new technique enables 2-fold increase in bug coverage. An important feature of our technique is its software-only implementation without any hardware modification. Hence, it is readily applicable to existing designs.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Estimating and sampling graphs with multidimensional random walks Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods, such as (independent) random vertex sampling and random walks, are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real-world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.
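A rough sketch of the m-dimensional walk: m dependent walkers share a sampling budget, and at each step one walker is chosen with probability proportional to its current vertex degree and then takes a single random-walk step. The adjacency map adj is a hypothetical input; isolated vertices are assumed absent so every degree is positive.

import random

def frontier_sample(adj, m, steps, seed=0):
    rnd = random.Random(seed)
    walkers = rnd.sample(list(adj), m)   # m starting vertices
    visited = []
    for _ in range(steps):
        degs = [len(adj[v]) for v in walkers]
        i = rnd.choices(range(m), weights=degs)[0]  # degree-proportional pick
        walkers[i] = rnd.choice(adj[walkers[i]])    # one random-walk step
        visited.append(walkers[i])
    return visited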
Backwards-compatible array bounds checking for C with very low overhead The problem of enforcing correct usage of array and pointer references in C and C++ programs remains unsolved. The approach proposed by Jones and Kelly (extended by Ruwase and Lam) is the only one we know of that does not require significant manual changes to programs, but it has extremely high overheads of 5x-6x and 11x-12x in the two versions. In this paper, we describe a collection of techniques that dramatically reduce the overhead of this approach, by exploiting a fine-grain partitioning of memory called Automatic Pool Allocation. Together, these techniques bring the average overhead of the checks down to only 12% for a set of benchmarks (but 69% for one case). We show that the memory partitioning is key to bringing down this overhead. We also show that our technique successfully detects all buffer overrun violations in a test suite modeling reported violations in some important real-world programs.
Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix -- novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors.
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load-transient response of the conventional Type III compensator, while the Type III response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conductance modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching-frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple-mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM), and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM), and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.22, 0.031429, 0.024444, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
IP covert timing channels: design and detection A network covert channel is a mechanism that can be used to leak information across a network in violation of a security policy and in a manner that can be difficult to detect. In this paper, we describe our implementation of a covert network timing channel, discuss the subtle issues that arose in its design, and present performance data for the channel. We then use our implementation as the basis for our experiments in its detection. We show that the regularity of a timing channel can be used to differentiate it from other traffic and present two methods of doing so and measures of their efficiency. We also investigate mechanisms that attackers might use to disrupt the regularity of the timing channel, and demonstrate methods of detection that are effective against them.
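The detection idea above rests on covert timing channels making inter-packet delays (IPDs) unusually regular. A sketch of a window-based regularity statistic in that spirit follows; the window size and the normalized pairwise-difference form are illustrative choices, not the paper's exact parameters, and enough samples are assumed for at least two windows.

import numpy as np

def regularity(ipds, w=100):
    # standard deviation of IPDs inside consecutive windows
    stds = [np.std(ipds[i:i + w]) for i in range(0, len(ipds) - w + 1, w)]
    # normalized pairwise differences between window deviations
    diffs = [abs(si - sj) / si
             for i, si in enumerate(stds)
             for sj in stds[i + 1:] if si > 0]
    return float(np.std(diffs))   # small value => suspiciously regular traffic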
TCP/IP Timing Channels: Theory to Implementation There has been significant recent interest in covert communication using timing channels. In network timing channels, information is leaked by controlling the time between transmissions of consecutive packets. Our work focuses on network timing channels and provides two main contributions. The first is to quantify the threat posed by covert network timing channels. The other is to use timing channels to communicate at a low data rate without being detected. In this paper, we design and implement a covert TCP/IP timing channel. We are able to quantify the achievable data rate (or leak rate) of such a covert channel. Moreover, we show that by sacrificing data rate, the traffic patterns of the covert timing channel can be made computationally indistinguishable from that of normal traffic, which makes detecting such communication virtually impossible. We demonstrate the efficacy of our solution by showing significant performance gains in terms of both data rate and covertness over the state-of-the-art.
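The core encoding principle reduces to a few lines: covert bits modulate the inter-packet delay. The sketch below uses naive on-off keying between two fixed delays; the paper's scheme additionally shapes delays to be indistinguishable from normal traffic, which this toy deliberately omits. The delay constants are arbitrary.

D0, D1 = 0.05, 0.15              # seconds of delay for bit 0 / bit 1
THRESH = (D0 + D1) / 2

def encode(bits):
    return [D1 if b else D0 for b in bits]       # delays the sender inserts

def decode(ipds):
    return [1 if d > THRESH else 0 for d in ipds]  # receiver thresholds IPDs

assert decode(encode([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]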
Energy Efficient Run-Time Incremental Mapping for 3-D Networks-on-Chip 3-D Networks-on-Chip (NoC) emerge as a potent solution to address both the interconnection and design complexity problems facing future Multiprocessor System-on-Chips (MPSoCs). Effective run-time mapping on such 3-D NoC-based MPSoCs can be quite challenging, as the arrival order and task graphs of the target applications are typically not known a priori, which can be further complicated by stringent energy requirements for NoC systems. This paper thus presents an energy-aware run-time incremental mapping algorithm (ERIM) for 3-D NoC which can minimize the energy consumption due to the data communications among processor cores, while reducing the fragmentation effect on the incoming applications to be mapped, and simultaneously satisfying the thermal constraints imposed on each incoming application. Specifically, incoming applications are mapped to cuboid tile regions for lower energy consumption of communication and minimal routing. Fragment tiles due to system fragmentation can be gleaned for better resource utilization. Extensive experiments have been conducted to evaluate the performance of the proposed algorithm ERIM, and the results are compared against the optimal mapping algorithm (branch-and-bound) and two heuristic algorithms (TB and TL). The experiments show that ERIM outperforms the TB and TL methods with significant energy savings (more than 10%), much reduced average response time, and improved system utilization.
Designing Analog Fountain Timing Channels: Undetectability, Robustness, and Model-Adaptation. In existing model-based timing channels, the requirement for the target model to be shared between the sender and the receiver limits the sender's ability to adapt to changes in the inter-packet delay (IPD) distribution of the application traffic. In this paper, using analog fountain codes (AFCs) with a general model-fitting coding framework, we design timing channel schemes that allow the sender to change the target model without synchronizing with the receiver. We first propose analog fountain timing channels based on symbol transition when the application packet streams have an IPD distribution whose shape is similar to the distribution of AFC code symbol values. For more general packet streams, we then propose analog fountain timing channels based on symbol split, in which the linearly mapped symbols are split using a symbol probability split matrix to mimic the IPD distribution of the application traffic. We use real VoIP and SSH traffic to compare the proposed schemes with model-based timing channels using LT codes and AFC. Experimental results show that both of the proposed schemes are model-secure. The robustness of the two schemes is higher than that of model-based timing channels using LT codes, though not as good as those using AFC when the sender and receiver sides are synchronized with respect to the target model. Moreover, when the sender and the receiver are not synchronized with respect to the model, the robustness of the proposed schemes is significantly higher than that of model-based timing channels.
Efficient Post-Silicon Validation of Network-on-Chip Using Wireless Links Modern complex interconnect systems are augmented with new features to serve the increasing number of on-chip processing elements (PEs). To achieve the desired performance, power, and reliability in contemporary designs, Networks-on-Chip (NoCs) are reinforced with additional hardware and pipeline stages. Wireless hubs are supplemented on top of the baseline wired NoC for efficient intra-chip long-distance communications. With the increasing complexity of the network, it is extremely difficult to ensure the functional correctness of the interconnect module at the pre-silicon verification stage. Hence, a robust post-silicon validation mechanism for NoCs has to be devised to guarantee the error-free functioning of the system. This paper exploits the capabilities of the wireless hubs present in wireless NoC (WNoC) to establish a novel post-silicon validation model for communication networks. The proposed method facilitates better observability of the system in case of transient packet faults such as misroutes and packet drops, without any additional overhead in terms of trace buffer size and trace bandwidth requirement. An overall 30% improvement in fault detection and path reconstruction is observed with this wireless scheme in comparison to the wired network. The wireless transceivers constructively use the existing network to transport the traces to the external debug analyzer, thus eliminating the need for an additional trace bus while elevating the speed of trace communication.
Temporal Thermal Covert Channels in Cloud FPGAs. With increasing interest in Cloud FPGAs, such as Amazon's EC2 F1 instances or Microsoft's Azure with Catapult servers, FPGAs in cloud computing infrastructures can become targets for information leakage via covert channel communication. Cloud FPGAs leverage temporal sharing of the FPGA resources between users. This paper shows that heat generated by one user can be observed by another user who later uses the same FPGA. The covert data transfer can be achieved through simple on-off keying (OOK), and the use of multiple FPGA boards in parallel significantly improves data throughput. The new temporal thermal covert channel is demonstrated on Microsoft's Catapult servers with FPGAs running remotely in the Texas Advanced Computing Center (TACC). A number of defenses against the new temporal thermal covert channel are presented at the end of the paper.
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
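If-conversion replaces a key-dependent branch with straight-line code that computes both arms and selects one without branching. A language-level caricature of the transform is below; real if-conversion happens in the compiler backend on machine code, so the Python forms only show the shape of the rewrite, and the mask arithmetic assumes 32-bit unsigned operands.

def leaky_select(bit, a, b):
    if bit:                       # branch timing depends on the secret bit
        return a
    return b

def branchless_select(bit, a, b):
    mask = -int(bit) & 0xFFFFFFFF               # all-ones if bit == 1, else 0
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)  # select without branching

assert branchless_select(1, 7, 9) == 7
assert branchless_select(0, 7, 9) == 9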
Cross Processor Cache Attacks. Multi-processor systems are becoming the de facto standard across different computing domains, ranging from high-end multi-tenant cloud servers to low-power mobile platforms. The denser integration of CPUs creates an opportunity for great economic savings achieved by packing processes of multiple tenants or by bundling all kinds of tasks at various privilege levels to share the same platform. This level of sharing carries with it a serious risk of leaking sensitive information through the shared microarchitectural components. Microarchitectural attacks initially only exploited core-private resources, but were quickly generalized to resources shared within the CPU. We present the first fine-grain side channel attack that works across processors. The attack does not require CPU co-location of the attacker and the victim. The novelty of the proposed work is that, for the first time, the directory protocol of high-efficiency CPU interconnects is targeted. The directory protocol is common to all modern multi-CPU systems. Examples include AMD's HyperTransport, Intel's Quickpath, and ARM's AMBA Coherent Interconnect. The proposed attack does not rely on any specific characteristic of the cache hierarchy, e.g. inclusiveness. Note that inclusiveness was assumed in all earlier works. Furthermore, the viability of the proposed covert channel is demonstrated with two new attacks: by recovering a full AES key in OpenSSL, and a full ElGamal key in libgcrypt within the range of seconds on a shared AMD Opteron server.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced "cache" memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called "sectors." Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
Merged Two-Stage Power Converter With Soft Charging Switched-Capacitor Stage in 180 nm CMOS In this paper, we introduce a merged two-stage dc-dc power converter for low-voltage power delivery. By separating the transformation and regulation function of a dc-dc power converter into two stages, both large voltage transformation and high switching frequency can be achieved. We show how the switched-capacitor stage can operate under soft charging conditions by suitable control and integration (merging) of the two stages. This mode of operation enables improved efficiency and/or power density in the switched-capacitor stage. A 5-to-1 V, 0.8 W integrated dc-dc converter has been developed in 180 nm CMOS. The converter achieves a peak efficiency of 81%, with a regulation stage switching frequency of 10 MHz.
Design and analysis of an adaptive transcutaneous power telemetry for biomedical implants An inductively coupled coil pair is the most common way of wirelessly transferring power to medical implants. However, coil displacements and/or loading changes may induce large fluctuations in the power transmitted into the implant if no adaptive control is used. In such cases, it is required to transmit excessive power to accommodate all working conditions, which substantially reduces the power efficiency and imposes potential safety concerns. We have implemented a power transfer system with an adaptive control technique to eliminate the power variations due to loading or coupling-coefficient changes. A maximum of 250 mW is transmitted through an optimized coil pair driven by a Class-E power amplifier. Load-shift keying is implemented to wirelessly transfer data back from the secondary to the primary side over the same coil pair, with a data rate of 3.3 kbps and a packet error rate of less than 10^-5. A pseudo pulse-width modulation has been designed to facilitate back data transmission along with forward power transmission. Through this back telemetry the system transmits information on the received power back from the implant to the primary side. According to the data received, the system adjusts the supply voltage of the Class-E power amplifier through a digitally controlled dc-dc converter, thus varying the power sent to the implant. The key system parameters are optimized to ensure the stability of the closed-loop system. Measurements show that the system can transmit the 'just-needed' power for a wide range of coil separations and/or loading conditions, with power efficiency doubled compared to the uncompensated link.
A Rapid Prototyping Methodology and Platform for Seamless Communication Systems The availability of reconfigurable technologies has enabled the construction of flexible systems allowing run-time reconfiguration of system hardware and software functions. "Seamless communications" (also known in the radio communications world as "reconfigurable" or "software-defined" radio) is one of the areas where technologies allowing run-time reconfiguration are highly desirable. This paper presents a new rapid prototyping methodology and platform for prototyping generic seamless communication systems. The methodology combines a C-based software design flow targeting host and DSP processors, and a rapid FPGA hardware design flow based on Handel-C, a C-like programming language. The hardware design flow also supports the generation of partial FPGA configurations. A library of parametrised communication modules was developed to facilitate the rapid construction of common communication architectures. A PC-based prototyping platform provides a set of custom hardware interfaces for prototyping systems with radio-frequency (RF), infra-red (IR), and generic wide-bandwidth communication links. The feasibility of the presented methodology was tested on several simple demonstrator applications.
Synthesizing information systems knowledge: A typology of literature reviews.
• We proposed a typology of nine review types based on seven core dimensions.
• The number of reviews in top-ranked IS journals has increased between 1999 and 2013.
• Theoretical and narrative reviews are the most prevalent types in top IS journals.
• We found inconsistencies in the labels used by authors to qualify IS reviews.
• A majority of IS reviews reported only scholars as their target audience.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible enough to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signals with various signal dimensions (128, 256, 384, and 512). Data c...
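For context on what such a processor iterates, here is a compact numpy sketch of ADMM applied to the LASSO form of compressive-sensing recovery, min 0.5*||Ax - b||^2 + lam*||x||_1. The sizes, lam, and rho below are toy values, and the paper's hardware formulation may differ in detail.

import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    solve = np.linalg.inv(AtA + rho * np.eye(n))  # factor reused every iteration
    for _ in range(iters):
        x = solve @ (Atb + rho * (z - u))         # quadratic x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                             # dual update
    return z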
score_0–score_13: 1.034638, 0.031004, 0.028571, 0.028571, 0.028571, 0.014286, 0.004096, 0.000582, 0, 0, 0, 0, 0, 0