Dataset schema. Each row contains one query abstract (Query Text), 13 ranked candidate abstracts (Ranking 1 to Ranking 13), and 14 float scores (score_0 to score_13). For string columns the range is the observed character length; for float columns it is the observed value range.

Column       Type     Range
Query Text   string   10 – 40.4k
Ranking 1    string   12 – 40.4k
Ranking 2    string   12 – 36.2k
Ranking 3    string   10 – 36.2k
Ranking 4    string   13 – 40.4k
Ranking 5    string   12 – 36.2k
Ranking 6    string   13 – 36.2k
Ranking 7    string   10 – 40.4k
Ranking 8    string   12 – 36.2k
Ranking 9    string   12 – 36.2k
Ranking 10   string   12 – 36.2k
Ranking 11   string   20 – 6.21k
Ranking 12   string   14 – 8.24k
Ranking 13   string   28 – 4.03k
score_0      float64  1 – 1.25
score_1      float64  0 – 0.25
score_2      float64  0 – 0.25
score_3      float64  0 – 0.25
score_4      float64  0 – 0.25
score_5      float64  0 – 0.25
score_6      float64  0 – 0.25
score_7      float64  0 – 0.24
score_8      float64  0 – 0.2
score_9      float64  0 – 0.03
score_10     float64  0 – 0
score_11     float64  0 – 0
score_12     float64  0 – 0
score_13     float64  0 – 0
Example row 1

Query Text: Plundervolt: Software-based Fault Injection Attacks against Intel SGX Dynamic frequency and voltage scaling features have been introduced to manage ever-growing heat and power consumption in modern processors. Design restrictions ensure frequency and voltage are adjusted as a pair, based on the current load, because for each frequency there is only a certain voltage range where the processor can operate correctly. For this purpose, many processors (including the widespread Intel Core series) expose privileged software interfaces to dynamically regulate processor frequency and operating voltage. In this paper, we demonstrate that these privileged interfaces can be reliably exploited to undermine the system's security. We present the Plundervolt attack, in which a privileged software adversary abuses an undocumented Intel Core voltage scaling interface to corrupt the integrity of Intel SGX enclave computations. Plundervolt carefully controls the processor's supply voltage during an enclave computation, inducing predictable faults within the processor package. Consequently, even Intel SGX's memory encryption/authentication technology cannot protect against Plundervolt. In multiple case studies, we show how the induced faults in enclave computations can be leveraged in real-world attacks to recover keys from cryptographic algorithms (including the AES-NI instruction set extension) or to induce memory safety vulnerabilities into bug-free enclave code. We finally discuss why mitigating Plundervolt is not trivial, requiring trusted computing base recovery through microcode updates or hardware changes.
WHISK: an uncore architecture for dynamic information flow tracking in heterogeneous embedded SoCs In this paper, we describe, for the first time, how Dynamic Information Flow Tracking (DIFT) can be implemented for heterogeneous designs that contain one or more on-chip accelerators attached to a network-on-chip. We observe that implementing DIFT for such systems requires a holistic platform-level view, i.e., designing individual components in the heterogeneous system to be capable of supporting DIFT is necessary but not sufficient to correctly implement full-system DIFT. Based on this observation, we present a new system architecture for implementing DIFT, and also describe wrappers that provide DIFT functionality for third-party IP components. Results show that our implementation minimally impacts performance of programs that do not utilize DIFT, and the price of security is constant for modest amounts of tagging and then sub-linearly increases with the amount of tagging.
Thermal monitoring mechanisms for chip multiprocessors With large-scale integration and increasing power densities, thermal management has become an important tool to maintain performance and reliability in modern process technologies. In the core of dynamic thermal management schemes lies accurate reading of on-die temperatures. Therefore, careful planning and embedding of thermal monitoring mechanisms into high-performance systems becomes crucial. In this paper, we propose three techniques to create sensor infrastructures for monitoring the maximum temperature on a multicore system. Initially, we extend a nonuniform sensor placement methodology proposed in the literature to handle chip multiprocessors (CMPs) and show its limitations. We then analyze a grid-based approach where the sensors are placed on a static grid covering each core and show that the sensor readings can differ from the actual maximum core temperature by as much as 12.6°C when using 16 sensors per core. Also, as many as 10.6% of the thermal emergencies are not captured using the same number of sensors. Based on this observation, we first develop an interpolation scheme, which estimates the maximum core temperature through interpolation of the readings collected at the static grid points. We show that the interpolation scheme improves the measurement accuracy and emergency coverage compared to grid-based placement when using the same number of sensors. Second, we present a dynamic scheme where only a subset of the sensor readings is collected to predict the maximum temperature of each core. Our results indicate that we can reduce the number of active sensors by as much as 50%, while maintaining similar measurement accuracy and emergency coverage compared to the case where the entire sensor set on the grid is sampled at all times.
SHIELD: a software hardware design methodology for security and reliability of MPSoCs Security of MPSoCs is an emerging area of concern in embedded systems. Security is jeopardized by code injection attacks, which are the most common types of software attacks. Previous attempts to detect code injection in MPSoCs have been burdened with significant performance overheads. In this work, we present a hardware/software methodology "SHIELD" to detect code injection attacks in MPSoCs. SHIELD instruments the software programs running on application processors in the MPSoC and also extracts control flow and basic-block execution time information for runtime checking. We employ a dedicated security processor (monitor processor) to supervise the application processors on the MPSoC. Custom hardware is designed and used in the monitor and application processors. The monitor processor uses the custom hardware to rapidly analyze information communicated to it from the application processors at runtime. We have implemented SHIELD on a commercial extensible processor (Xtensa LX2) and tested it on a multiprocessor JPEG encoder program. In addition to code injection attacks, the system is also able to detect 83% of bit-flip errors in the control flow instructions. The experiments show that SHIELD produces systems whose runtime is at least 9 times faster than the previous solution. SHIELD incurs a runtime (clock cycles) performance overhead of only 6.6% and an area overhead of 26.9%, when compared to a non-secure system.
Towards decentralized system-level security for MPSoC-based embedded applications. With the increasing connectivity and complexity of embedded systems, security issues have become a key consideration in design. In this paper, we propose a decentralized system-level approach for isolating application tasks without the need to rely on a centralized privileged authority at run-time. We discuss the need for isolation to reduce the potential impact of a task compromise or untrustworthy IP block, and present mechanisms to allow for safe sharing of memory regions and IP blocks between tasks in the system. After exploring the architectural requirements for enforcing our security model we present a hardware Isolation Unit, which can be customized for different types of dynamic permission changes depending on task-resource relationships and added to heterogeneous MPSoCs to enforce our security approach.
Dedicated Security Chips in the Age of Secure Enclaves Secure enclave architectures have become prevalent in modern CPUs. Enclaves provide a flexible way to implement various hardware-assisted security services. But special-purpose security chips can still have advantages. Interestingly, dedicated security chips can also assist enclaves and improve their security.
Detecting Hardware Covert Timing Channels. Information security and data privacy have steadily grown into major concerns in computing, especially given the rapid transition into the digital age for all needs--from healthcare to national defense. Among the many forms of information leakage, covert timing channels can be dangerous primarily because they involve two parties intentionally colluding to exfiltrate sensitive data by subverting the underlying system security policy. The attackers establish an illegitimate communication channel between two processes and transmit information via resource timing modulation, which does not leave any physical activity trace for later forensic analysis. Recent studies have shown the vulnerability of many popular computing environments, such as cloud computing, to these covert timing channels. With the advancements in software confinement mechanisms, shared processor hardware structures will be natural targets for malicious attackers to exploit and implement their covert-timing-based channels. In this work, the authors present a microarchitecture-level framework that detects the possible presence of covert timing channels on shared hardware. Their experiments demonstrate their ability to successfully detect different types of covert timing channels on various hardware structures and communication patterns.
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
Trellis-coded modulation with bit interleaving and iterative decoding This paper considers bit-interleaved coded modulation (BICM) for bandwidth-efficient transmission using software radios. A simple iterative decoding (ID) method with hard-decision feedback is suggested to achieve better performance. The paper shows that convolutional codes with good Hamming-distance properties can provide both high diversity order and large free Euclidean distance for BICM-ID. The method offers a common framework for coded modulation over channels with a variety of fading statistics. In addition, BICM-ID allows an efficient combination of punctured convolutional codes and multiphase/level modulation, and therefore provides a simple mechanism for variable-rate transmission.
Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds Third-party cloud computing represents the promise of outsourcing as applied to computation. Services, such as Microsoft's Azure and Amazon's EC2, allow users to instantiate virtual machines (VMs) on demand and thus purchase precisely the capacity they require when they require it. In turn, the use of virtualization allows third-party cloud providers to maximize the utilization of their sunk capital costs by multiplexing many customer VMs across a shared physical infrastructure. However, in this paper, we show that this approach can also introduce new vulnerabilities. Using the Amazon EC2 service as a case study, we show that it is possible to map the internal cloud infrastructure, identify where a particular target VM is likely to reside, and then instantiate new VMs until one is placed co-resident with the target. We explore how such placement can then be used to mount cross-VM side-channel attacks to extract information from a target VM on the same machine.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Extremal cover times for random walks on trees
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy-efficiencies are limited due to their excessive supports for flexibility. Hardware accelerators may achieve better energy-efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even setting learning accuracy aside, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows which are adjacent to the prediction interval. This algorithm is able to track the input signal's variation range and automatically adjust the subrange interval and update the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and a 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB, respectively. The ADC achieves a FoM of 48 fJ/conversion-step in the best case. The prototype has also been tested with an ECG signal input and successfully extracts the heartbeat signal.
Scores (score_0 to score_13): 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.033333, 0, 0, 0, 0, 0, 0, 0
Example row 2

Query Text: An integrated fluxgate magnetometer for use in closed-loop/open-loop isolated current sensing. This paper presents two integrated magnetic sensor ICs for isolated current sensing. Both employ an integrated fluxgate magnetometer with a sensitivity of 250V/T and a 500ksps readout circuit. Only 5.4mW is required to excite the sensor, which is 20x more power efficient than the state-of-the-art. With an external magnetic core, the resulting closed-loop current sensor IC achieves a dynamic range of 112dB and a non-linearity below 0.03%, while the open-loop current sensor IC has a dynamic range of 100dB and a non-linearity below 0.2%.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High-voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterruptible and switching-mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is at the μT level [1-3], in contrast to the nT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750x larger BW than the prior closed-loop IFG readouts [6-7] with 10x better offset than the Hall-effect sensors [1-3].
A continuous-time ripple reduction technique for spinning-current Hall sensors The intrinsic offset of Hall sensors can be reduced with the help of the spinning-current technique, which modulates this offset away from the signal band. The resulting offset ripple can then be removed by a low-pass filter, which, however, limits the sensor's bandwidth. This paper presents a ripple-reduction technique that does not require a low-pass filter. Measurements on a Hall sensor system implemented in a 0.18μm CMOS process show that the technique can reduce the residual ripple by at least 40dB - to the same level as the sensor's noise.
Highly sensitive Hall magnetic sensor microsystem in CMOS technology A highly sensitive magnetic sensor microsystem based on a Hall device is presented. This microsystem consists of a Hall device improved by an integrated magnetic concentrator and new circuit architecture for the signal processing. It provides an amplification of the sensor signal with a resolution better than 30 μV and a periodic offset cancellation while the output of the microsystem is av...
An Adaptive Resolution Asynchronous ADC Architecture for Data Compression in Energy Constrained Sensing Applications An adaptive resolution (AR) asynchronous analog-to-digital converter (ADC) architecture is presented. Data compression is achieved by the inherent signal dependent sampling rate of the asynchronous architecture. An AR algorithm automatically varies the ADC quantizer resolution based on the rate of change of the input. This overcomes the trade-off between dynamic range and input bandwidth typically seen in asynchronous ADCs. A prototype ADC fabricated in a 0.18 μm CMOS technology, and utilizing the subthreshold region of operation, achieves an equivalent maximum sampling rate of 50 kS/s, an SNDR of 43.2 dB, and consumes 25 μW from a 0.7 V supply. The ADC is also shown to provide data compression for accelerometer applications as a proof of concept demonstration.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called the semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector and a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) the transient period for all points to enter the set of attractors; and d) the basin of each attractor. The corresponding algorithms are developed and applied to some examples.
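To make the linear representation in the preceding abstract concrete, here is a minimal sketch in the semi-tensor-product notation standard in this literature; the symbols (δ, ⋉, L) are the usual conventions of that framework, assumed here rather than quoted from the abstract:

```latex
% Logical values become vectors:
%   True  ~ \delta_2^1 = (1,0)^T,   False ~ \delta_2^2 = (0,1)^T.
% Stacking the node states with the semi-tensor product \ltimes turns the
% whole Boolean network into a linear system:
\[
  x(t) \;=\; x_1(t) \ltimes x_2(t) \ltimes \cdots \ltimes x_n(t) \in \Delta_{2^n},
  \qquad
  x(t+1) \;=\; L\,x(t),
\]
% where L is a 2^n-by-2^n logical (0/1, column-stochastic) transition matrix.
% Attractor structure is then read off powers of L; for instance, the number
% of fixed points equals \operatorname{trace}(L).
```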
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of ''simulation,'' noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
MicroGP—An Evolutionary Assembly Program Generator This paper describes µGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. µGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show µGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed.
A 0.5-V 2.5-GHz high-gain low-power regenerative amplifier based on Colpitts oscillator topology in 65-nm CMOS This paper proposes the regenerative amplifier based on the Colpitts oscillator topology. The positive feedback amount was optimized analytically in the circuit design. The proposed regenerative amplifier was fabricated in 65 nm CMOS technology. The measurement results showed 28.7 dB gain and 6.4 dB noise figure at 2.55 GHz while consuming 120 μW under the 0.5-V power supply.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores (score_0 to score_13): 1.116, 0.077333, 0.041333, 0.0155, 0.001333, 0, 0, 0, 0, 0, 0, 0, 0, 0
Example row 3

Query Text: Approximate counting, uniform generation and rapidly mixing Markov chains The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ R or for no β ∈ R. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Analysis of Distributed Random Grouping for Aggregate Computation on Wireless Sensor Networks with Randomly Changing Graphs Dynamical connection graph changes are inherent in networks such as peer-to-peer networks, wireless ad hoc networks, and wireless sensor networks. Considering the influence of the frequent graph changes is thus essential for precisely assessing the performance of applications and algorithms on such networks. In this paper, using stochastic hybrid systems (SHSs), we model the dynamics and analyze the performance of an epidemic-like algorithm, distributed random grouping (DRG), for average aggregate computation on a wireless sensor network with dynamical graph changes. Particularly, we derive the convergence criteria and the upper bounds on the running time of the DRG algorithm for a set of graphs that are individually disconnected but jointly connected in time. An effective technique for the computation of a key parameter in the derived bounds is also developed. Numerical results and an application extended from our analytical results to control the graph sequences are presented to exemplify our analysis.
Brief announcement: locality-based aggregate computation in wireless sensor networks We present DRR-gossip, an energy-efficient and robust aggregate computation algorithm in sensor networks. We prove that the DRR-gossip algorithm requires O(n) messages and O(n^{3/2}/log^{1/2} n) one-hop wireless transmissions to obtain aggregates on a random geometric graph. This reduces the energy consumption by at least a factor of 1/log n over the standard uniform gossip algorithm. Experiments validate the theoretical results and show that DRR-gossip needs far fewer transmissions than other gossip-based schemes.
Initializing sensor networks of non-uniform density in the weak sensor model Assumptions about node density in the Sensor Networks literature are frequently too strong or too weak. Neither absolutely arbitrary nor uniform deployment seem feasible in most of the intended applications of sensor nodes. We present a Weak Sensor Model-compatible distributed protocol for hop-optimal network initialization, under the assumption that the maximum density of nodes is some value Δ known by all of the nodes. In order to prove lower bounds, we observe that all nodes must communicate with some other node in order to join the network, and we call the problem of achieving such a communication the Group Therapy Problem. We show lower bounds for the Group Therapy Problem in Radio Networks of maximum density Δ, regardless of the use of randomization, and a stronger lower bound for the important class of randomized fair protocols. We also show that even when nodes are distributed uniformly, the same lower bound holds, even in expectation and even for the simpler problem of Clear Transmission.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
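The preceding abstract describes computing averages (and related aggregates) by gossip. As a rough, hypothetical illustration of the averaging case only, the following sketch uses a global view in place of real peer sampling, and shows why pairwise push-pull exchanges drive every node's estimate to the network-wide average:

```python
import random

def gossip_average(values, rounds=30):
    """Push-pull gossip averaging sketch: each round, every node contacts a
    random peer and both adopt the mean of their two estimates. The total
    sum is conserved, so all estimates converge to the global average."""
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)              # random peer (global view here)
            est[i] = est[j] = (est[i] + est[j]) / 2.0
    return est

print(gossip_average([10.0, 0.0, 4.0, 2.0]))     # all entries near 4.0
```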
Non Trivial Computations in Anonymous Dynamic Networks. In this paper we consider a static set of anonymous processes, i.e., they do not have distinguished IDs, that communicate with neighbors using a local broadcast primitive. The communication graph changes at each computational round with the restriction of being always connected, i.e., the network topology guarantees 1-interval connectivity. In such a setting, non-trivial computations, i.e., answering a predicate such as "is there at least one process with initial input a?", are impossible. In a recent work, it has been conjectured that the impossibility holds even if a distinguished leader process is available within the computation. In this paper we prove that the conjecture is false. We show this result by implementing a deterministic leader-based terminating counting algorithm. In order to build our counting algorithm, we first develop a counting technique that is time-optimal on a family of dynamic graphs where each process has a fixed distance h from the leader and such distance does not change along rounds. Using this technique we build an algorithm that counts in anonymous 1-interval connected networks.
Comparison of initial conditions for distributed algorithms on anonymous networks This paper studies the "usefulness" of initial conditions for distributed algorithms on anonymous networks. In the literature, several initial conditions, such as making one vertex a leader, giving the number of vertices to each vertex, and so on, have been considered. In this paper, we study the relation between initial conditions by considering transformation algorithms from one initial condition to another. For such transformation algorithms, we consider both deterministic and randomized distributed algorithms. For each transformation type, deterministic and randomized, we show that the relation induces an infinite lattice structure among equivalence classes of initial conditions.
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
Wireless Communications Transmitter Performance Enhancement Using Advanced Signal Processing Algorithms Running in a Hybrid DSP/FPGA Platform This paper deals with digital base band signal processing algorithms, which are seen as enabling technologies for software-enabled radios, that are intended for the correction of the analog front end. In particular, this paper focuses on the design, optimization and testability of predistortion functions suitable for the linearization of narrowband and wideband transmitters developed with a hybrid DSP/FPGA platform. To select the best algorithm for the identification of the predistortion function, singular value decomposition, recursive least squares (RLS), and QR-RLS algorithms are implemented on the same digital signal processor; and, the computation complexity, time, accuracy and the required resources are studied. The hardware implementation of the predistortion function is then carefully performed, in order to meet the real time execution requirements.
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time varying delay. The dead band is considered as a linear term and a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov Krasovskii functionals, non-perfect time delay estimation is studied and a condition for the effect of uncertainties on the dead zone on stability is derived. Also the effect of different LMI formulations on conservativeness is studied. The control law is tested in practice.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple-set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± d)/N, where d denotes the number of inserted or deleted samples, the number of possible patterns of insertion/deletion points and the number of combinations for multiple-set inserters/deleters grow as d increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores (score_0 to score_13): 1.048628, 0.061582, 0.054181, 0.054181, 0.035805, 0.023955, 0.01, 0.00107, 0, 0, 0, 0, 0, 0
Example row 4

Query Text: A comprehensive survey of industry practice in real-time systems This paper presents results and observations from a survey of 120 industry practitioners in the field of real-time embedded systems. The survey provides insights into the characteristics of the systems being developed today and identifies important trends for the future. It extends the results from the survey data to the broader population that it is representative of, and discusses significant differences between application domains. The survey aims to inform both academics and practitioners, helping to avoid divergence between industry practice and academic research. The value of this research is highlighted by a study showing that the aggregate findings of the survey are not common knowledge in the real-time systems community.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
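For readers who want the flavor of the dominance-frontier concept the preceding abstract introduces, here is a minimal sketch of computing dominance frontiers from immediate dominators. It uses the later, compact formulation due to Cooper, Harvey, and Kennedy rather than the paper's own construction, and `preds`/`idom` are assumed inputs:

```python
def dominance_frontiers(preds, idom):
    """Dominance frontiers from immediate dominators.
    preds: node -> list of CFG predecessors.
    idom:  node -> immediate dominator (the entry node is its own idom)."""
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) >= 2:                  # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:  # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: A -> B, A -> C, B -> D, C -> D
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
idom  = {"A": "A", "B": "A", "C": "A", "D": "A"}
print(dominance_frontiers(preds, idom))   # DF(B) = DF(C) = {'D'}
```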
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
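A toy sketch of Chord's single operation, mapping a key onto the node that stores it. The node names and the 8-bit identifier circle are hypothetical, and the linear scan stands in for real Chord, which resolves the successor in O(log n) hops via finger tables:

```python
import hashlib

M = 8  # identifier bits; the circle has 2**M points

def chord_id(name: str) -> int:
    """Hash a key or node name onto the identifier circle."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

def successor(node_ids, key_id):
    """The node responsible for key_id is the first node at or after it
    clockwise on the circle, wrapping around past the top."""
    candidates = sorted(node_ids)
    for nid in candidates:
        if nid >= key_id:
            return nid
    return candidates[0]  # wrap around the circle

nodes = [chord_id(f"node-{i}") for i in range(4)]   # hypothetical nodes
k = chord_id("some-data-item")
print(f"key {k} is stored at node {successor(nodes, k)}")
```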
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
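For reference, the iterations the preceding abstract alludes to, written in the scaled form commonly used (and covered in that review); the notation is the standard one for ADMM, assumed here rather than quoted from the abstract:

```latex
% ADMM for  minimize f(x) + g(z)  subject to  Ax + Bz = c,
% with penalty rho > 0 and scaled dual variable u:
\[
\begin{aligned}
  x^{k+1} &:= \operatorname*{arg\,min}_{x}\; f(x)
      + \tfrac{\rho}{2}\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2,\\
  z^{k+1} &:= \operatorname*{arg\,min}_{z}\; g(z)
      + \tfrac{\rho}{2}\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2,\\
  u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
\]
% Consensus variants assign f across agents and use z as the shared variable.
```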
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Example row 5

Query Text: Second-Order Continuous-Time Algorithms for Economic Power Dispatch in Smart Grids. This paper proposes two second-order continuous-time algorithms to solve the economic power dispatch problem in smart grids. The collective aim is to minimize a sum of generation cost functions subject to the power demand and individual generator constraints. First, in the framework of nonsmooth analysis and algebraic graph theory, one distributed second-order algorithm is developed and guaranteed ...
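The underlying dispatch problem described in this query abstract can be stated compactly as below; the symbols (C_i, P_i, D, and the box limits) are generic conventions assumed for illustration, not quoted from the paper:

```latex
% Economic power dispatch: n generators choose outputs P_i that minimize the
% total generation cost while meeting demand D and capacity limits.
\[
\begin{aligned}
  \min_{P_1,\dots,P_n}\;& \sum_{i=1}^{n} C_i(P_i)\\
  \text{s.t.}\;& \sum_{i=1}^{n} P_i = D,\qquad
  P_i^{\min} \le P_i \le P_i^{\max},\quad i = 1,\dots,n.
\end{aligned}
\]
```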
A Distributed and Scalable Processing Method Based Upon ADMM. The alternating direction multiplier method (ADMM) was originally devised as an iterative method for solving convex minimization problems by means of parallelization, and was recently used for distributed processing. This letter proposes a modification of state-of-the-art ADMM formulations in order to obtain a scalable version, well suited for a wide range of applications such as cooperative local...
Distributed Random Convex Programming via Constraints Consensus. This paper discusses distributed approaches for the solution of random convex programs (RCPs). RCPs are convex optimization problems with a (usually large) number N of randomly extracted constraints; they arise in several application areas, especially in the context of decision-making under uncertainty; see [G. C. Calafiore, SIAM J. Optim., 20 (2010), pp. 3427-3464; G. C. Calafiore and M. C. Campi, IEEE Trans. Automat. Control, 51 (2006), pp. 742-753]. We here consider a setup in which instances of the random constraints (the scenario) are not held by a single centralized processing unit, but are instead distributed among different nodes of a network. Each node "sees" only a small subset of the constraints, and may communicate with neighbors. The objective is to make all nodes converge to the same solution as the centralized RCP problem. To this end, we develop two distributed algorithms that are variants of the constraints consensus algorithm [G. Notarstefano and F. Bullo, Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, 2007, pp. 927-932; G. Notarstefano and F. Bullo, IEEE Trans. Automat. Control, 56 (2011), pp. 2247-2261]: the active constraints consensus algorithm, and the vertex constraints consensus (VCC) algorithm. We show that the active constraints consensus algorithm computes the overall optimal solution in finite time, and with almost surely bounded communication at each iteration of the algorithm. The VCC algorithm is instead tailored for the special case in which the constraint functions are convex also with respect to the uncertain parameters, and it computes the solution in a number of iterations bounded by the diameter of the communication graph. We further devise a variant of the VCC algorithm, namely quantized vertex constraints consensus (qVCC), to cope with the case in which communication bandwidth among processors is bounded. We discuss several applications of the proposed distributed techniques, including estimation, classification, and random model predictive control, and we present a numerical analysis of the performance of the proposed methods. As a complementary numerical result, we show that the parallel computation of the scenario solution using the active constraints consensus algorithm significantly outperforms its centralized equivalent.
Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization Given an undirected graph G = (N, E) of agents N = {1,..., N} connected with edges in E, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions $\{\Phi_i\}_{i \in \mathcal{N}}$, where $\Phi_i \triangleq \rho_i + f_i$ belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In one iteration, each agent-i computes the prox map of $\rho_i$ and the gradient of $f_i$, and this is followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, which can only access noisy estimates of $\nabla f_i$ at each agent-i. This computational model abstracts a number of applications in distributed sensing, machine learning and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates $O(1/t)$ and $O(1/\sqrt{t})$, respectively.
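As a rough illustration of this computational model, here is a decentralized proximal-gradient sketch in the spirit of DPGA, not its exact update: each agent takes a local gradient step on its smooth term, averages with neighbors, then applies the prox of its nonsmooth term. The ring topology, step size, and the choice $\rho_i = \lambda|x|_1$, $f_i(x) = \tfrac12(x - c_i)^2$ are all assumptions for the demo.

```python
# Decentralized proximal-gradient sketch (DPGA-flavored, not the exact method).
import numpy as np

n, lam, step = 5, 0.1, 0.3
c = np.linspace(-1.0, 3.0, n)          # private data c_i per agent (illustrative)
W = 0.5 * np.eye(n)                    # doubly stochastic ring mixing matrix
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def soft(x, t):                        # prox of t*|.|_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.zeros(n)                        # agent estimates
for _ in range(200):
    grad = x - c                       # grad f_i(x_i)
    x = W @ (x - step * grad)          # local gradient step + neighbor averaging
    x = soft(x, step * lam)            # prox of rho_i
# the consensus minimizer of sum_i 0.5*(x-c_i)^2 + lam*|x|_1 is soft(mean(c), lam);
# with a constant step the iterates land approximately there
print(x, "expected ~", soft(np.mean(c), lam))
```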
A Smooth Double Proximal Primal-Dual Algorithm for a Class of Distributed Nonsmooth Optimization Problems This technical note studies a class of distributed nonsmooth convex consensus optimization problems. The cost function is a summation of local cost functions which are convex but nonsmooth. Each of the local cost functions consists of a twice differentiable (smooth) convex function and two lower semi-continuous (nonsmooth) convex functions. We call these problems single-smooth plus double-nonsmooth (SSDN) problems. Under mild conditions, we propose a distributed double proximal primal-dual optimization algorithm. Double proximal splitting is designed to deal with the difficulty caused by the unproximable property of the summation of those two nonsmooth functions. Besides, it can also guarantee that the proposed algorithm is locally Lipschitz continuous. An auxiliary variable in the double proximal splitting is introduced to estimate the subgradient of the second nonsmooth function. Theoretically, we conduct the convergence analysis by employing Lyapunov stability theory. It shows that the proposed algorithm can make the states achieve consensus at the optimal point. In the end, nontrivial simulations are presented and the results demonstrate the effectiveness of the proposed algorithm.
Cooperative Optimization of Dual Multiagent System for Optimal Resource Allocation In this paper, a continuous-time multiagent system is proposed for solving optimal resource allocation problems with local allocation feasible constraints. In the system, all the primal agents are divided into different groups. We use dual variables which describe the dual agents to represent the groups of the original agents. The groups of dual agents are used to communicate with others on behalf of the primal agents to reduce communication costs. That is to say, primal agents aim to seek their own optimal solutions by using local information. And dual agents represent primal agents to communicate with other agents in different groups by using the whole group information. The two kinds of agents cooperate to find the optimal solution of the problem. In this way, we only need to know the connections of dual agents to design the multiagent network, and do not need to consider the connections of the primal agents. So the communication cost and the amount of variables will be largely reduced especially for large-scale problem. Furthermore, it is proved that the multiagent system can reach consensus with respect to the dual variables. At the same time, the primal variables are convergent to the optimal solutions of the optimization problem under some certain assumptions on the communication network. For large-scale problem if we take the groups as areas, then the system is suitable for multiarea problem. Simulation results are presented to demonstrate the performance of the proposed multiagent system.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
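A minimal sketch of ADMM on one of this review's canonical examples, the lasso: minimize $0.5\|Ax-b\|^2 + \lambda\|z\|_1$ subject to $x = z$. Problem sizes, $\lambda$, and $\rho$ are arbitrary illustration choices.

```python
# ADMM for the lasso: x-update is a linear solve (factored once), z-update is
# soft-thresholding, u is the scaled dual variable.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam, rho = 30, 10, 0.1, 1.0
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

x = z = u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(100):
    # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    z = soft(x + u, lam / rho)                  # z-update: prox of the l1 term
    u = u + x - z                               # scaled dual update
print("lasso solution:", z)
```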
Pinning synchronization of complex dynamical networks with and without time-varying delay. The pinning synchronization in two types of complex dynamical networks are studied in this paper. In the first one, the nodes are coupled by their states; In the second one, the nodes are coupled by their past states or delayed states. By designing suitable pinning control schemes, several synchronization criteria are derived for these proposed network models. Moreover, some adaptive strategies are developed to get proper coupling strength for the first network model. For the second network model, we give several synchronization criteria by utilizing the designed pinning adaptive feedback controllers. Finally, two numerical examples indicate that complex dynamical networks with and without time-varying delay can achieve synchronization by pinning a small fraction of nodes.
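To make the pinning idea concrete, here is a toy sketch: only one node of a diffusively coupled network receives a feedback controller toward the reference trajectory, yet the coupling drags every node onto it. The scalar dynamics f, gains, and ring topology are illustrative choices, not the paper's models or criteria.

```python
# Toy pinning synchronization: N coupled scalar nodes, node 0 pinned to s(t).
import numpy as np

N, c, k, dt = 10, 1.0, 5.0, 0.01
A = np.zeros((N, N))                       # ring coupling graph
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0
Lap = np.diag(A.sum(1)) - A                # graph Laplacian

f = lambda x: -x + np.tanh(2 * x)          # assumed node dynamics
rng = np.random.default_rng(1)
x, s = rng.uniform(-2, 2, N), 1.5          # random initial states, reference state

for _ in range(5000):
    u = np.zeros(N)
    u[0] = -k * (x[0] - s)                 # pinning control on a single node
    x = x + dt * (f(x) - c * (Lap @ x) + u)
    s = s + dt * f(s)                      # reference obeys the same dynamics
print("max |x_i - s| after pinning:", np.max(np.abs(x - s)))
```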
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
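A sketch of the one operation Chord provides, mapping a key onto a node: keys and nodes share one identifier circle, and a key is stored at its successor node. Finger-table routing and churn handling are omitted; identifier width and node names are illustrative.

```python
# Chord-style consistent hashing: key -> successor node on the identifier circle.
import hashlib
from bisect import bisect_left

M = 2 ** 16                                     # identifier space (2^m, assumed m=16)

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

nodes = sorted(chord_id(f"node-{i}") for i in range(8))

def successor(key: str) -> int:
    kid = chord_id(key)
    idx = bisect_left(nodes, kid)               # first node id >= key id
    return nodes[idx % len(nodes)]              # wrap around the circle

print("'somefile' ->", successor("somefile"))
```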
Class-C Harmonic CMOS VCOs, With a General Result on Phase Noise A harmonic oscillator topology displaying an improved phase noise performance is introduced in this paper. Exploiting the advantages yielded by operating the core transistors in class-C, a theoretical 3.9 dB phase noise improvement compared to the standard differential-pair LC-tank oscillator is achieved for the same current consumption. Further benefits derive from the natural rejection of the tail bias current noise, and from the absence of parasitic nodes sensitive to stray capacitances. Closed-form phase-noise equations obtained from a rigorous time-variant circuit analysis are presented, as well as a time-variant study of the stability of the oscillation amplitude, resulting in simple guidelines for a reliable design. Furthermore, the analysis of phase noise is extended to encompass a general harmonic oscillator, showing that all phase noise relations previously obtained for specific LC oscillator topologies are special cases of a very general and remarkably simple result.
Replica compensated linear regulators for supply-regulated phase-locked loops Supply-regulated phase-locked loops rely upon the VCO voltage regulator to maintain a low sensitivity to supply noise and hence low overall jitter. By analyzing regulator supply rejection, we show that in order to simultaneously meet the bandwidth and low dropout requirements, previous regulator implementations used in supply-regulated PLLs suffer from unfavorable tradeoffs between power supply rejection and power consumption. We therefore propose a compensation technique that places the regulator's amplifier in a local replica feedback loop, stabilizing the regulator by increasing the amplifier bandwidth while lowering its gain. Even though the forward gain of the amplifier is reduced, supply noise affects the replica output in addition to the actual output, and therefore the amplifier's gain to reject supply noise is effectively restored. Analysis shows that for reasonable mismatch between the replica and actual loads, regulator performance is uncompromised, and experimental results from a 90 nm SOI test chip confirm that with the same power consumption, the proposed regulator achieves at least 4 dB higher supply rejection than the previous regulator design. Furthermore, simulations show that if not for other supply rejection-limiting components in the PLL, the supply rejection improvement of the proposed regulator is greater than 15 dB.
Reputation management in collaborative computing systems In collaborative systems, a set of organizations shares their computing resources, such as compute cycles, storage space or on-line services, in order to establish Virtual Organizations (VOs) aimed at achieving common tasks. The formation and operation of Virtual Organizations involve establishing trust among their members and reputation is one measure by which such trust can be quantified and reasoned about. In this paper, we contribute to research in the area of trust for collaborative computing systems along two directions: first, we provide a survey on the main reputation-based systems that fulfil the trust requirements for collaborative systems, including reputation systems designed for e-commerce, agent-based environments, and Peer-to-Peer computing and Grid-based systems. Second, we present a model for reputation management for Grid Virtual Organizations that is based on utility computing and that can be used to rate users according to their resource usage and resources and their providers according to the quality of service they deliver. We also demonstrate, through Grid simulations, how the model can be used in improving completion and welfare in Virtual Organizations.
Analog Filter Design Using Ring Oscillator Integrators Integrators are key building blocks in many analog signal processing circuits and systems. The DC gain of conventional opamp-RC or Gm- C integrators is severely limited by the gain of operational transconductance amplifier (OTA) used to implement them. Process scaling reduces transistor output resistance, which further exacerbates this issue. We propose applying ring oscillator integrators (ROIs) in the design of high order analog filters. ROIs implemented with simple CMOS inverters achieve infinite DC gain at low supply voltages independent of transistor non-idealities and imperfections such as finite output impedance. Consequently, ROIs scale more effectively into newer processes. A prototype fourth order filter designed using the ROIs was fabricated in 90 nm CMOS and occupies an area of 0.29 mm2. Operating with a 0.55 V supply, the filter consumes 2.9 mW power and achieves bandwidth of 7 MHz, SNR of 61.4 dB, SFDR of 67.6 dB and THD of 60.1 dB. The measured IM3 obtained by feeding two tones at 1 MHz and 2 MHz is 63.4 dB.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.055285
0.05
0.05
0.05
0.05
0.025
0.005419
0.000056
0
0
0
0
0
0
A 2.4 GHz Fractional-N Frequency Synthesizer With High-OSR ΔΣ Modulator and Nested PLL. This paper presents a nested-PLL architecture for a low-noise wide-bandwidth fractional-N frequency synthesizer. In order to reduce the quantization noise, the operating frequency of the ΔΣ modulator (DSM) is increased by using an intermediate output of the feedback divider. A PLL which serves as an anti-alias filter is added to suppress noise aliasing caused by the divider. Prototype implemented in a 0.13 μm...
A 3.2-to-3.8GHz Calibration-Free Harmonic-Mixer-Based Dual-Feedback Fractional-N PLL Achieving –66dBc Worst-Case In-Band Fractional Spur A dual-feedback architecture for a fractional-N PLL is proposed to achieve low spurs and to suppress the phase noise degradation from the Delta-Sigma Modulator (DSM). With the assistance of 1 auxiliary PLL, the proposed architecture avoids noise amplification that occurs in conventional architectures. The feasibility of the proposed architecture is demonstrated in a calibration-free 3.2-to-3.8GHz analog fractional-N PLL that achieves -69dBc out-of-band spur and -66dBc worst-case in-band fractional spur.
A 2.4-GHz 6.4-mW fractional-N inductorless RF synthesizer. A cascaded synthesizer architecture incorporates a digital delay-line-based filter and an analog noise trap to suppress the quantization noise of the ΣΔ modulator. Operating with a reference frequency of 22.6 MHz, the synthesizer achieves a bandwidth of 10 MHz in the first loop and 12 MHz in the second, heavily suppressing the phase noise of its constituent ring oscillators. Realized in 45-nm digi...
Efficient dithering in MASH sigma-delta modulators for fractional frequency synthesizers The digital multistage-noise-shaping (MASH) ΣΔ modulators used in fractional frequency synthesizers are prone to spur tone generation in their output spectrum. In this paper, the state of the art on spur-tone-magnitude reduction is used to demonstrate that an M-bit MASH architecture dithered by a simple M-bit linear feedback shift register (LFSR) can be as effective as more sophisticated topologies if the dither signal is properly added. A comparison between the existent digital ΣΔ modulators used in fractional synthesizers is presented to demonstrate that the MASH architecture has the best tradeoff between complexity and quantization noise shaping, but they present spur tones. The objective of this paper was to significantly decrease the area of the circuit used to reduce the spur tone magnitude for these MASH topologies. The analysis is validated with a theoretical study of the paths where the dither signal can be added. Experimental results of a digital M-bit MASH 1-1-1 ΣΔ modulator with the proposed way to add the LFSR dither are presented to make a hardware comparison.
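A behavioral sketch of the dithered MASH 1-1-1 this abstract describes: three cascaded first-order accumulators, an error-cancellation network, and a 1-bit LFSR dither added at the LSB of the first stage, which is one of the injection paths the paper analyzes. Accumulator width, LFSR taps, and the fractional word are illustrative choices.

```python
# MASH 1-1-1 digital delta-sigma modulator with LFSR dither at the first-stage LSB.
import numpy as np

M = 16
MASK = (1 << M) - 1
frac = 0x4000                       # fractional word (0.25 of full scale, assumed)

def lfsr_bit(state):                # 16-bit Fibonacci LFSR, taps 16,15,13,4
    bit = ((state >> 15) ^ (state >> 14) ^ (state >> 12) ^ (state >> 3)) & 1
    return ((state << 1) | bit) & 0xFFFF, bit

s1 = s2 = s3 = 0
c2d = c3d = c3dd = 0
state, out = 0xACE1, []
for _ in range(1 << 14):
    state, d = lfsr_bit(state)
    s1 += frac + d                  # dithered first-stage input
    c1, s1 = s1 >> M, s1 & MASK
    s2 += s1
    c2, s2 = s2 >> M, s2 & MASK
    s3 += s2
    c3, s3 = s3 >> M, s3 & MASK
    # error-cancellation network: y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3
    out.append(c1 + c2 - c2d + c3 - 2 * c3d + c3dd)
    c2d, c3d, c3dd = c2, c3, c3d
print("mean division-ratio offset:", np.mean(out))   # ~ frac / 2^M = 0.25
```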
21.1 A 1.7GHz MDLL-based fractional-N frequency synthesizer with 1.4ps RMS integrated jitter and 3mW power using a 1b TDC The introduction of inductorless frequency synthesizers into standardized wireless systems still requires a high level of innovation in order to achieve the stringent requirements of low noise and low power consumption. Synthesizers based on the so-called multiplying delay-locked loop (MDLL) represent one of the most promising architectures in this direction [1-3]. An MDLL resembles a ring oscillator, in which the signal edge traveling along the delay line is periodically refreshed by a clean edge of the reference clock. In this manner, the phase noise of the ring oscillator is filtered up to half the reference frequency and the total output jitter is reduced significantly. Unfortunately, the concept of MDLL, and in general of injection locking (IL), is inherently limited to integer-N synthesis, which makes it unacceptable in practical RF systems. A first extension of injection locking to coarse fractional-N resolution has been shown in [4], in which however the fractional resolution is bounded to the inverse of the number of ring-oscillator delay stages. This paper introduces a fractional-N MDLL-based frequency synthesizer with a 1b time/digital converter (TDC), which is able to outreach the performance of inductorless fractional-N synthesizers. The prototype synthesizes frequencies between 1.6 and 1.9GHz with 190Hz resolution and achieves RMS integrated jitter of 1.4ps at 3mW power consumption, even in the worst-case of near-integer channel.
A wideband 2.4-GHz delta-sigma fractional-N PLL with 1-Mb/s in-loop modulation A phase noise cancellation technique and a charge pump linearization technique, both of which are insensitive to component errors, are presented and demonstrated as enabling components in a wideband CMOS delta-sigma fractional-N phase-locked loop (PLL). The PLL has a loop bandwidth of 460 kHz and is capable of 1-Mb/s in-loop FSK modulation at center frequencies of 2402 + k MHz for k = 0, 1, 2, ...
A study of injection locking and pulling in oscillators Injection locking characteristics of oscillators are derived and a graphical analysis is presented that describes injection pulling in time and frequency domains. An identity obtained from phase and envelope equations is used to express the requisite oscillator nonlinearity and interpret phase noise reduction. The behavior of phase-locked oscillators under injection pulling is also formulated. Index Terms: Adler's equation, injection locking, injection pulling, oscillator nonlinearity, oscillator pulling, quadrature oscillators.
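For reference, the paper's core relations can be stated compactly under the usual narrowband LC-tank assumptions: Adler's equation for the phase difference $\theta$ between the oscillator and the injected signal, and the lock range $\omega_L$ (Razavi's generalization of Adler's small-injection result; $Q$ is the tank quality factor).

```latex
% Adler's equation and the injection lock range; locking requires
% |\omega_0 - \omega_{inj}| <= \omega_L.
\frac{d\theta}{dt} \;=\; \omega_0 - \omega_{\mathrm{inj}}
  \;-\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}\,\sin\theta,
\qquad
\omega_L \;=\; \frac{\omega_0}{2Q}\,
  \frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}\,
  \frac{1}{\sqrt{1 - I_{\mathrm{inj}}^{2}/I_{\mathrm{osc}}^{2}}}
```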
Low-power area-efficient high-speed I/O circuit techniques We present a 4-Gb/s I/O circuit that fits in 0.1 mm² of die area, dissipates 90 mW of power, and operates over 1 m of 7-mil 0.5-oz PCB trace in a 0.25-μm CMOS technology. Swing reduction is used in an input-multiplexed transmitter to provide most of the speed advantage of an output-multiplexed architecture with significantly lower power and area. A delay-locked loop (DLL) using a supply-regulated inverter delay line gives very low jitter at a fraction of the power of a source-coupled delay line-based DLL. Receiver capacitive offset trimming decreases the minimum resolvable swing to 8 mV, greatly reducing the transmission energy without affecting the performance of the receive amplifier. These circuit techniques enable a high level of I/O integration to relieve the pin bandwidth bottleneck of modern VLSI chips.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
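A minimal sketch of the gradient-based learning the paper surveys: a two-layer network trained by backpropagation on XOR. Architecture and hyperparameters are illustrative; the paper's convolutional networks for document recognition are far richer.

```python
# Backpropagation on XOR with one tanh hidden layer and a sigmoid output.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    p = sig(h @ W2 + b2)
    dp = (p - y) / len(X)               # grad of mean cross-entropy wrt logits
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for w, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        w -= 0.5 * g                    # plain gradient descent step
print(p.round(3).ravel())               # ~ [0, 1, 1, 0]
```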
Control-flow integrity principles, implementations, and applications Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.
Implementing aggregation and broadcast over Distributed Hash Tables Peer-to-peer (P2P) networks represent an effective way to share information, since there are no central points of failure or bottleneck. However, the flip side to the distributive nature of P2P networks is that it is not trivial to aggregate and broadcast global information efficiently. We believe that this aggregation/broadcast functionality is a fundamental service that should be layered over existing Distributed Hash Tables (DHTs), and in this work, we design a novel algorithm for this purpose. Specifically, we build an aggregation/broadcast tree in a bottom-up fashion by mapping nodes to their parents in the tree with a parent function. The particular parent function family we propose allows the efficient construction of multiple interior-node-disjoint trees, thus preventing single points of failure in tree structures. In this way, we provide DHTs with an ability to collect and disseminate information efficiently on a global scale. Simulation results demonstrate that our algorithm is efficient and robust.
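A sketch of the bottom-up construction idea: a parent function maps each DHT identifier toward a root, and normal DHT routing resolves each parent identifier to the node that owns it. The specific parent function below (clear the lowest set bit, so identifier 0 is the root) is a simple illustrative choice, not necessarily the function family proposed in the paper.

```python
# Parent-function aggregation tree over a DHT identifier space.
def parent(node_id: int) -> int:
    return node_id & (node_id - 1)      # clear lowest set bit; 0 is the root

def path_to_root(node_id: int):
    path = [node_id]
    while node_id != 0:
        node_id = parent(node_id)
        path.append(node_id)
    return path

# Every identifier's aggregation path converges on the root, forming one tree.
print(path_to_root(0b10110))            # [22, 20, 16, 0]
```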
A 2.87±0.19dB NF 3.1∼10.6GHz ultra-wideband low-noise amplifier using 0.18µm CMOS technology.
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs showing a superior figure of merit (Rds,ON × Qg) in comparison with silicon FETs [1] can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully take advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit propagation delay, tdelay, of up to several 10s of ns in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10MHz regime. Moreover, the switching slew rate (SR) of driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while slow SR leads to long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving which incurs additional cost and complexity of the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high fsw operation in multiphase implementation are challenging.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.052344
0.05
0.036458
0.016667
0.015167
0.002809
0.000165
0.000001
0
0
0
0
0
0
Spike Sorting: The First Step in Decoding the Brain In this article, we present an overview of the spike-sorting problem, its current solutions, and the challenges that remain. Because of the increasing demand for chronically implanted spike-sorting hardware, we will also discuss implementation considerations.
An Integrated Passive Phase-Shift Keying Modulator for Biomedical Implants With Power Telemetry Over a Single Inductive Link. This paper presents a passive phase-shift keying (PPSK) modulator for uplink data transmission for biomedical implants with simultaneous power and data transmission over a single 13.56 MHz inductive link. The PPSK modulator provides a data rate up to 1.35 Mbps with a modulation index between 3% and 38% for a variation of the coupling coefficient between 0.05 and 0.26. This modulation scheme is par...
A Neural Probe With Up to 966 Electrodes and Up to 384 Configurable Channels in 0.13 $\mu$m SOI CMOS. In vivo recording of neural action-potential and local-field-potential signals requires the use of high-resolution penetrating probes. Several international initiatives to better understand the brain are driving technology efforts towards maximizing the number of recording sites while minimizing the neural probe dimensions. We designed and fabricated (0.13-μm SOI Al CMOS) a 384-channel configurabl...
A Wireless Power and Data Transfer IC for Neural Prostheses Using a Single Inductive Link With Frequency-Splitting Characteristic This paper presents a frequency-splitting-based wireless power and data transfer IC that simultaneously delivers power and forward data over a single inductive link. For data transmission, frequency-shift keying (FSK) is utilized because the FSK modulation scheme supports continuous wireless power transmission without disruption of the carrier amplitude. Moreover, the link that manifests the frequ...
Design of a Bone-Guided Cochlear Implant Microsystem With Monopolar Biphasic Multiple Stimulations and Evoked Compound Action Potential Acquisition and Its In Vivo Verification A CMOS bone-guided cochlear implant (BGCI) microsystem is proposed and verified. In the implanted System on Chip (SoC) of the proposed BGCI, the evoked compound action potential (ECAP) acquisition and electrode–tissue impedance measurement (EAEIM) circuit is integrated to measure both ECAP and electrode–tissue impedance for clinical diagnoses. Both positive-/negative-voltage charge pumps and monop...
Feasibility Study on Active Back Telemetry and Power Transmission Through an Inductive Link for Millimeter-Sized Biomedical Implants. This paper presents a feasibility study of wireless power and data transmission through an inductive link to a 1-mm2 implant, to be used as a free-floating neural probe, distributed across a brain area of interest. The proposed structure utilizes a four-coil inductive link for back telemetry, shared with a three-coil link for wireless power transmission. We propose a design procedure for geometric...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A 93% efficiency reconfigurable switched-capacitor DC-DC converter using on-chip ferroelectric capacitors.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2MHz and an FoM of 53 fJ/conversion-step.
A Data-Compressive Wired-OR Readout for Massively Parallel Neural Recording. Neural interfaces of the future will be used to help restore lost sensory, motor, and other capabilities. However, realizing this futuristic promise requires a major leap forward in how electronic devices interface with the nervous system. Next generation neural interfaces must support parallel recording from tens of thousands of electrodes within the form factor and power budget of a fully implan...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Reconstruction of Two-Periodic Nonuniformly Sampled Band-Limited Signals Using a Discrete-Time Differentiator and a Time-Varying Multiplier This brief considers the problem of reconstructing a band-limited signal from its two-periodic nonuniformly spaced samples. We propose a novel reconstruction system where a finite-impulse response filter designed as differentiator followed by a time-varying multiplier recovers the uniformly spaced from the nonuniformly spaced samples. The system roughly doubles the signal-to-noise ratio with relat...
Seven-bit 700-MS/s Four-Way Time-Interleaved SAR ADC With Partial $V_{\mathrm {cm}}$ -Based Switching. This brief presents a 7-bit 700-MS/s four-way time-interleaved successive approximation register (SAR) analog-to-digital converter (ADC). A partial Vcm-based switching method is proposed that requires less digital overhead from the SAR controller and achieves better conversion accuracy. Compared with switchback switching, the proposed method can further reduce the common mode variation by 50%. In ...
Adaptive Blind Timing Mismatch Calibration with Low Power Consumption in M-Channel Time-Interleaved ADC. This paper proposes an adaptive blind calibration scheme to minimize timing mismatch in M-channel time-interleaved analog-to-digital converter (TIADC). By using a derivative filter, the timing mismatch can be calibrated with the coefficients estimated by calculating the average value of the relative timing errors for all sub-ADCs. The coefficients can be estimated by utilizing the filtered-X least-mean square algorithm. The main advantages of the proposed method are that the difficulty of the implementation and the power consumption can be reduced dramatically. Simulation results show the effectiveness of the proposed calibration technique. The design is synthesized on the TSMC-55 nm technology, and the estimated power consumption of digital part is about 4.27 mW with 1.2 V supply voltage. The measurement results of the FPGA validate system show that the proposed calibration can improve the SNR of a four-channel 400 MHz 14-b real TIADC system from 39.82 to 65.13 dB.
First Order Statistic Based Fast Blind Calibration of Time Skews for Time-Interleaved ADCs A full digital background method is proposed in this brief for timing mismatch calibrations of time-interleaved analog-to-digital converters (TIADCs) with wide sense stationary input signals. Firstly, errors caused by timing mismatches are modeled as additive errors by applying the Taylor series approximation. Then, a mismatch estimation approach based on first-order statistics is proposed. In addition, computations are simplified significantly by using some valuable properties of the signals in TIADC channels. Further, a variable step-size iterative technique is presented to reduce steady-state errors. So the mismatch estimates are guaranteed to converge to actual values within three steps iterations. Theoretical analyses and experiments show that the proposed method has the advantages of low complexity and fast convergence over most other conventional methods. The proposed method is feasible for fast on-line timing mismatch calibrations of TIADCs.
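The additive first-order Taylor model these calibration papers build on can be sketched directly: a sample taken with timing skew dt satisfies x(nT + dt) ≈ x(nT) + dt·x'(nT), so subtracting an estimated dt·x' recovers the on-grid sample. In the sketch below the skew is assumed known (estimating it blindly is the papers' actual contribution) and a crude central-difference differentiator stands in for their filters.

```python
# Two-channel TIADC timing-skew correction via the first-order Taylor model.
import numpy as np

f0, N = 0.11, 4096                         # tone frequency (cycles/sample), length
dt = 0.02                                  # timing skew of the odd channel, in units of T
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n)             # ideal on-grid samples
xs = x.copy()
xs[1::2] = np.sin(2 * np.pi * f0 * (n[1::2] + dt))   # skewed odd channel

deriv = np.gradient(xs)                    # crude full-rate differentiator
xc = xs.copy()
xc[1::2] -= dt * deriv[1::2]               # Taylor correction of odd samples

err = lambda v: 20 * np.log10(np.linalg.norm(v - x) / np.linalg.norm(x))
print(f"error before: {err(xs):.1f} dB, after: {err(xc):.1f} dB")
```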
Novel adaptive blind calibration technique of time-skew mismatches for any channel time-interleaved analogue-to-digital converters This article presented a novel digital blind calibration technique of time-skew mismatches for time-interleaved analogue-to-digital converter (TI-ADC). Based on the frequency-shifted and derived operation, the spurious signals could be reconstructed and subtracted from the sampled signal adaptively. The main advantage of the proposed calibration technique is applicable to any channel TI-ADC and could achieve higher performance in comparison with the state-of-the-arts. Numerical simulations and experimental results have demonstrated that the proposed calibration technique could significantly improve the signal to noise and distortion ratio (SNDR) and spurious-free dynamic range (SFDR) of the TI-ADC system.
Adaptive Calibration of Channel Mismatches in Time-Interleaved ADCs Based on Equivalent Signal Recombination In this paper, we present an adaptive calibration method for correcting channel mismatches in time-interleaved analog-to-digital converters (TIADCs). An equivalent signal recombination structure is proposed to eliminate aliasing components when the input signal bandwidth is greater than the Nyquist bandwidth of the sub-ADCs in a TIADC. A band-limited pseudorandom noise sequence is employed as the desired output of the TIADC and simultaneously is also converted into an analog signal, which is injected into the TIADC as the training signal during the calibration process. Adaptive calibration filters with parallel structures are used to optimize the TIADC output to the desired output. The main advantage of the proposed method is that it avoids a complicated error estimation or measurement and largely reduces the computational complexity. A four-channel 12-bit 400-MHz TIADC and its calibration algorithm are implemented by hardware, and the measured spurious-free dynamic range is greater than 76 dB up 90% of the entire Nyquist band. The hardware implementation cost can be dramatically reduced, especially in instrumentation or measurement equipment applications, where special calibration phases and system stoppages are common.
A 4.5-mW 8-b 750-MS/s 2-b/step asynchronous subranged SAR ADC in 28-nm CMOS technology A 8-b 2-b/step asynchronous subranged SAR ADC is presented. It incorporates subranging technique to obtain fast reference settling for MSB conversion. The capacitive interpolation reduces number of NMOS switches and lowers matching requirement of a resistive DAC. The proposed timing scheme avoids the need of specific duty cycle of external clock for defining sampling period in a conventional asynchronous SAR ADC. Operating at 750 MS/s, this ADC consumes 4.5 mW from 1-V supply, achieves ENOB of 7.2 and FOM of 41 fJ/conversion-step. It is fabricated in 28-nm CMOS technology and occupies an active area of 0.004 mm2.
An Introduction To Compressive Sampling Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article s...
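A compact sketch of the compressive sampling idea: a k-sparse signal is recovered from m << n random Gaussian measurements. Orthogonal matching pursuit is used here as a simple greedy stand-in; the article itself centers on ℓ1 minimization. Sizes and sparsity are illustrative.

```python
# Compressive sensing demo: random measurements + orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 64, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                      # compressive measurements

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated column
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                      # update residual
xhat = np.zeros(n)
xhat[support] = coef
print("recovery error:", np.linalg.norm(xhat - x))
```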
Multi-Phase 1 GHz Voltage Doubler Charge Pump in 32 nm Logic Process A multi-phase 1 GHz charge pump in 32 nm logic process demonstrates a compact area (159 × 42 μm²) for boosting supply voltage from twice the threshold voltage (2 Vth) to 3-4 Vth. Self contained clocking with metal-finger flying capacitors enable embedding voltage boost functionality in close proximity to digital logic for supplying low current Vmin requirement of state elements in logic blocks. Multi-phase operation with phase separation of the order of buffer delays avoids the need for a large storage reservoir capacitor. Special configuration of the pump stages to work in parallel enables a fast (5 ns) output transition from disable to enable state. The multi-phase pump operated as a 1 V to 2 V doubler with >5 mA output capability addresses the need for a gated power delivery solution for logic blocks having state-preservation Vmin requirements.
Solving the find-path problem by good representation of free space Free space is represented as a union of (possibly overlapping) generalized cones. An algorithm is presented which efficiently finds good collision-free paths for convex polygonal bodies through space littered with obstacle polygons. The paths are good in the sense that the distance of closest approach to an obstacle over the path is usually far from minimal over the class of topologically equivalent collision-free paths. The algorithm is based on characterizing the volume swept by a body as it is translated and rotated as a generalized cone, and determining under what conditions one generalized cone is a subset of another.
A capacitor-free CMOS low-dropout regulator with damping-factor-control frequency compensation A 1.5-V 100-mA capacitor-free CMOS low-dropout regulator (LDO) for system-on-chip applications to reduce board space and external pins is presented. By utilizing damping-factor-control frequency compensation on the advanced LDO structure, the proposed LDO provides high stability, as well as fast line and load transient responses, even in capacitor-free operation. The proposed LDO has been implemented in a commercial 0.6-μm CMOS technology, and the active chip area is 568 μm×541 μm. The total error of the output voltage due to line and load variations is less than ±0.25%, and the temperature coefficient is 38 ppm/°C. Moreover, the output voltage can recover within 2 μs for full load-current changes. The power-supply rejection ratio at 1 MHz is -30 dB, and the output noise spectral densities at 100 Hz and 100 kHz are 1.8 and 0.38 μV/√Hz, respectively.
An Electro-Magnetic Energy Harvesting System With 190 nW Idle Mode Power Consumption for a BAW Based Wireless Sensor Node. State-of-the-art wireless sensor nodes are mostly supplied by batteries. Such systems have the disadvantage that they are not maintenance free because of the limited lifetime of batteries. Instead, wireless sensor nodes or related devices can be remotely powered. To increase the operating range and applicability of these remotely powered devices an electro-magnetic energy harvester is developed in a 0.13 μm low cost CMOS technology. This paper presents an energy harvesting system that converts RF power to DC power to supply wireless sensor nodes, active transmitters or related systems with a power consumption up to the mW range. This energy harvesting system is used to power a wireless sensor node from the 900 MHz RF field. The wireless sensor node includes an on-chip temperature sensor and a bulk acoustic wave (BAW) based transmitter. The BAW resonator reduces the startup time of the transmitter to about 2 μs which reduces the amount of energy needed in one transmission cycle. The maximum output power of the transmitter is 5.4 dBm. The chip contains an ultra-low-power control unit and consumes only 190 nW in idle mode. The required input power is -19.7 dBm.
Formal Analysis of Leader Election in MANETs Using Real-Time Maude.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.046098
0.04
0.04
0.04
0.04
0.020889
0.002858
0.000005
0
0
0
0
0
0
A 0.4 V 63 $\mu$W 76.1 dB SNDR 20 kHz Bandwidth Delta-Sigma Modulator Using a Hybrid Switching Integrator. This paper presents a delta-sigma modulator operating at a supply voltage of 0.4 V. The designed delta-sigma modulator uses a proposed hybrid switching integrator and operates at a low supply voltage without clock boosting or bootstrapped switches. The proposed integrator consists of both switched-resistor and switched-capacitor operations and significantly reduces distortion at a low supply volta...
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations to improve throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently-activated critical paths. The second error-detection design offers a less-intrusive approach for dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits from the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at 1/2 FCLK. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
A 0.5-V 1.6-mW 2.4-GHz Fractional-N All-Digital PLL for Bluetooth LE With PVT-Insensitive TDC Using Switched-Capacitor Doubler in 28-nm CMOS. This paper proposes an ultra-low-voltage (ULV) fractional-N all-digital PLL (ADPLL) powered from a single 0.5-V supply. While its digitally controlled oscillator (DCO) runs directly at 0.5 V, an internal switched-capacitor dc-dc converter “doubles” the supply voltage to all the digital circuitry and particularly regulates the time-to-digital converter (TDC) supply to stabilize its resolution, thus...
An Ultra-Low-Voltage 160 MS/s 7 Bit Interpolated Pipeline ADC Using Dynamic Amplifiers This paper presents a 0.55 V, 7 bit, 160 MS/s pipeline ADC using dynamic amplifiers. In this ADC, high-speed open-loop dynamic amplifiers with a common-mode detection technique are used as residue amplifiers to increase the ADC's speed, to enhance the robustness against supply voltage scaling, and to realize clock-scalable power consumption. To mitigate the absolute gain constraint of the residue amplifiers in a pipeline ADC, the interpolated pipeline architecture is employed to shift the gain requirement from absolute to relative accuracy. To show the new requirements of the residue amplifiers, the effects of gain mismatch and nonlinearity of the dynamic amplifiers are analyzed. The 7 bit prototype ADC fabricated in 90 nm CMOS demonstrates an ENOB of 6.0 bits at a conversion rate of 160 MS/s with an input close to the Nyquist frequency. At this conversion rate, it consumes 2.43 mW from a 0.55 V supply. The resulting FoM of the ADC is 240 fJ/conversion-step.
A 300- $\mu\text{W}$ Audio $\Delta\Sigma$ Modulator With 100.5-dB DR Using Dynamic Bias Inverter This paper presents a micropower audio delta-sigma (ΔΣ) modulator for mobile applications. This work employs power-efficient integrators based on the dynamic bias inverter, which consists of a cascode inverter, a floating current source and two offset-storage capacitors. The quiescent current of the inverter is copied from the floating current via offset-storage capacitors and the speed limitation...
A 108dB DR ΔΣ-ΣM Front-End with 720mVpp Input Range and >300mV Offset Removal for Multi-Parameter Biopotential Recording. The recording of biopotential signals using techniques such as electroencephalography (EEG) and electrocardiography (ECG) poses important challenges to the design of the front-end readout circuits in terms of noise, electrode DC offset cancellation and motion artifact tolerance. In this paper, we present a 2nd-order hybrid-CTDT ΔΣ-Σ modulator front-end architecture that tackles these challenges by...
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
The Information Structure of Indulgent Consensus To solve consensus, distributed systems have to be equipped with oracles such as a failure detector, a leader capability, or a random number generator. For each oracle, various consensus algorithms have been devised. Some of these algorithms are indulgent toward their oracle in the sense that they never violate consensus safety, no matter how the underlying oracle behaves. This paper presents a simple and generic indulgent consensus algorithm that can be instantiated with any specific oracle and be as efficient as any ad hoc consensus algorithm initially devised with that oracle in mind. The key to combining genericity and efficiency is to factor out the information structure of indulgent consensus executions within a new distributed abstraction, which we call "Lambda." Interestingly, identifying this information structure also promotes a fine-grained study of the inherent complexity of indulgent consensus. We show that instantiations of our generic algorithm with specific oracles, or combinations of them, match lower bounds on oracle-efficiency, zero-degradation, and one-step-decision. We show, however, that no leader or failure detector-based consensus algorithm can be, at the same time, zero-degrading and configuration-efficient. Moreover, we show that leader-based consensus algorithms that are oracle-efficient are inherently zero-degrading, but some failure detector-based consensus algorithms can be both oracle-efficient and configuration-efficient. These results highlight some of the fundamental trade-offs underlying each oracle.
Information Spreading in Stationary Markovian Evolving Graphs Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the stationary phase by analyzing the completion time of the flooding mechanism. We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs. Geometric Markovian evolving graphs where the Markovian behaviour is yielded by n mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane. Edge-Markovian evolving graphs where the probability of existence of any edge at time t depends on the existence (or not) of the same edge at time t-1. In both cases, the obtained upper bounds hold with high probability and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks.
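For readers who want to see flooding on an edge-Markovian evolving graph concretely, here is a minimal, illustrative simulation; the birth/death probabilities and the node count are invented for the example and are not the paper's parameters.

```python
import random

# Toy flooding simulation on an edge-Markovian evolving graph: an absent
# edge appears with probability p_up, a present edge survives with
# probability 1 - p_down; node 0 starts informed and information spreads
# over the currently alive edges each round.
def flooding_time(n=100, p_up=0.02, p_down=0.5, seed=0):
    rng = random.Random(seed)
    alive = {}
    informed = {0}
    t = 0
    while len(informed) < n:
        t += 1
        for i in range(n):
            for j in range(i + 1, n):
                if alive.get((i, j), False):
                    alive[(i, j)] = rng.random() < 1 - p_down
                else:
                    alive[(i, j)] = rng.random() < p_up
        fresh = set()
        for (i, j), up in alive.items():
            if up and ((i in informed) != (j in informed)):
                fresh.update((i, j))
        informed |= fresh
    return t

print("flooding completed in", flooding_time(), "rounds")
```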
Feature selection for medical diagnosis: Evaluation for cardiovascular diseases Machine learning has emerged as an effective medical diagnostic support system. In a medical diagnosis problem, a set of features that are representative of all the variations of the disease are necessary. The objective of our work is to predict more accurately the presence of cardiovascular disease with a reduced number of attributes. We investigate an intelligent system to generate feature subsets with improved diagnostic performance. Features ranked with a distance measure are searched through forward inclusion, forward selection and backward elimination search techniques to find the subset that gives improved classification results. We propose a hybrid forward selection technique for cardiovascular disease diagnosis. Our experiment demonstrates that this approach finds smaller subsets and increases the accuracy of diagnosis compared to forward inclusion and backward elimination techniques.
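A minimal sketch of greedy forward selection is given below to make the search concrete. It scores candidate subsets with cross-validated logistic regression on synthetic stand-in data; the paper's hybrid method additionally ranks features with a distance measure first, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Greedy forward selection: repeatedly add the single feature that most
# improves cross-validated accuracy, stopping when nothing improves.
def forward_select(X, y, max_features=5):
    remaining = list(range(X.shape[1]))
    selected, best = [], 0.0
    while remaining and len(selected) < max_features:
        trials = [(cross_val_score(LogisticRegression(max_iter=1000),
                                   X[:, selected + [f]], y, cv=5).mean(), f)
                  for f in remaining]
        score, feat = max(trials)
        if score <= best:   # stop when no remaining feature helps
            break
        best = score
        selected.append(feat)
        remaining.remove(feat)
    return selected, best

# synthetic stand-in for a cardiovascular dataset (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print(forward_select(X, y))
```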
An Ultra-Low Power Fully Integrated Energy Harvester Based on Self-Oscillating Switched-Capacitor Voltage Doubler This paper presents a fully integrated energy harvester that maintains >35% end-to-end efficiency when harvesting from a 0.84 mm² solar cell in a low light condition of 260 lux, converting 7 nW input power from 250 mV to 4 V. Newly proposed self-oscillating switched-capacitor (SC) DC-DC voltage doublers are cascaded to form a complete harvester, with a configurable overall conversion ratio from 9× to 23×. In each voltage doubler, the oscillator is completely internalized within the SC network, eliminating clock generation and level shifting power overheads. A single doubler has >70% measured efficiency across 1 nA to 0.35 mA output current (>10⁵ range) with a low idle power consumption of 170 pW. In the harvester, each doubler has independent frequency modulation to maintain its optimum conversion efficiency, enabling optimization of the harvester's overall conversion efficiency. A leakage-based delay element provides energy-efficient frequency control over a wide range, enabling low idle power consumption and a wide load range with optimum conversion efficiency. The harvester delivers 5 nW-5 μW output power with >40% efficiency and has an idle power consumption of 3 nW, in a test chip fabricated in 0.18 μm CMOS technology.
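The conversion arithmetic of the cascaded doublers is easy to verify: n ideal doubling stages multiply the input by 2^n, so four stages cover the 250 mV to 4 V conversion mentioned above. The snippet below is only this ideal arithmetic; the chip's configurable 9x to 23x range comes from stage reconfiguration, which this toy model ignores.

```python
# Ideal cascaded voltage-doubler arithmetic: n stages multiply by 2**n.
# Losses and stage reconfiguration are ignored; this only checks that four
# ideal doublers cover the 250 mV -> 4 V conversion from the abstract.
v_in = 0.25  # V, solar-cell output in low light
for n in range(1, 6):
    print(f"{n} doubler stage(s): {v_in * 2 ** n:.2f} V ideal output")
```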
Charge-redistribution based quadratic operators for neural feature extraction. This paper presents a SAR converter based mixed-signal multiplier for the feature extraction of neural signals using quadratic operators. After a thorough analysis of design principles and circuit-level aspects, the proposed architecture is explored for the implementation of two quadratic operators often used for the characterization of neural activity, the moving average energy (MAE) operator and...
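To make the moving-average-energy operator concrete, here is a plain digital reference model of MAE; the window length and test signal are illustrative. The paper's contribution is realizing this kind of quadratic operator with charge redistribution in a SAR-style structure, not the operator itself.

```python
import numpy as np

# Moving-average energy (MAE): the mean of the squared signal over a
# sliding window, a common quadratic feature for detecting neural activity.
def mae(x, win=32):
    return np.convolve(x ** 2, np.ones(win) / win, mode='valid')

rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 500)          # background noise
signal[200:220] += rng.normal(0, 1.0, 20) # a burst of "neural" activity
energy = mae(signal)
print("peak MAE index:", int(np.argmax(energy)))  # lands near the burst
```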
Scores: 1.11, 0.12, 0.12, 0.12, 0.12, 0.1, 0.05, 0.001667, 0, 0, 0, 0, 0, 0
High-Resolution SAR ADC With Enhanced Linearity. This brief proposes two digital-to-analog converter switching techniques for the binary-weighted capacitor-array successive approximation register (SAR) analog-to-digital converter (ADC): the rotating&averaging without redundancy technique and the rotating&averaging with redundancy technique. The rotating&averaging without redundancy technique can improve the signal-to-noise ratio (SNR) and spurious free dyna...
High resolution and linearity enhanced SAR ADC for wearable sensing systems This paper presents a linearity-enhancing capacitor re-configuring technique to improve the Spurious Free Dynamic Range (SFDR) and Signal-to-Noise-and-Distortion Ratio (SNDR) of the ADC simultaneously, without sacrificing the sampling rate, in a 14-bit successive approximation register (SAR) analog-to-digital converter (ADC) for wearable electronics applications. Behavioural Monte-Carlo simulations are presented to demonstrate the effect of the proposed method, which requires no complex least-mean-square (LMS) algorithm. Simulation results show that with a mismatch error typical of modern technology, the SFDR is enhanced by about 18 dB and the SNDR is 15 dB better with the proposed technique for a 14-bit SAR ADC, which makes it suitable for accurate and linear smart sensor nodes in wearable sensing systems.
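The behavioural Monte-Carlo idea mentioned above can be reproduced in a few lines: model each binary-weighted capacitor with a random mismatch that shrinks as the square root of its size, then compute the DAC transfer curve and its INL. All parameters below (10 bits, 1% unit-capacitor sigma) are illustrative, not the paper's.

```python
import numpy as np

# Behavioural Monte-Carlo sketch of capacitor mismatch in a binary-weighted
# SAR DAC; returns the integral nonlinearity (INL) in LSB.
def dac_inl(bits=10, sigma_unit=0.01, seed=0):
    rng = np.random.default_rng(seed)
    nominal = 2.0 ** np.arange(bits)                 # binary weights, in unit caps
    # relative mismatch of a capacitor built from m units shrinks as 1/sqrt(m)
    caps = nominal * (1 + sigma_unit / np.sqrt(nominal) * rng.normal(size=bits))
    codes = np.arange(2 ** bits)
    levels = (((codes[:, None] >> np.arange(bits)) & 1) * caps).sum(axis=1)
    ideal = codes / codes[-1] * levels[-1]           # endpoint-fit straight line
    return (levels - ideal) / (levels[-1] / codes[-1])

print(f"peak INL = {np.abs(dac_inl()).max():.2f} LSB")
```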
A 10b 1.6GS/s 12.2mW 7/8-way Split Time-interleaved SAR ADC with Digital Background Mismatch Calibration This paper presents a split time-interleaved (TI) successive-approximation register (SAR) analog-to-digital converter (ADC) with digital background mismatch calibration. Benefitting from the proposed split TI topology, the mismatch calibration convergence speed is fast without any extra analog circuits. A prototype 10-b 1.6-GS/s 7/8-way split TI-SAR ADC in 28-nm CMOS achieves 54.2dB SNDR at Nyquist rate with a 2.5GHz 3-dB bandwidth, while the power consumption is 12.2mW leading to a Walden FOM of 18.2 fJ per conversion step.
A 102dB-SFDR 16-bit Calibration-Free SAR ADC in 180-nm CMOS A 16-bit successive approximation register analog-to-digital converter (ADC) is presented achieving superior spurious-free dynamic range (SFDR). This ADC uses VCM-based and binary-window digital-to-analog converter (DAC) switching schemes to improve the signal-to-noise and distortion ratio (SNDR). Moreover, a level-2 capacitor swapping scheme is proposed to improve the DAC linearity. A prototype ADC is fabricated in 180-nm CMOS and occupies an active area of 0.52 mm². At 500 kS/s, it consumes a total power of 633 μW from a supply of 1.5 V. The measured differential and integral nonlinearity are -0.65/+0.9 and -2.7/+2.5 LSB, respectively. With 1 kHz input, the measured SNDR and SFDR are 77.9 dB and 102 dB, respectively. The effective number of bits is 12.7, equivalent to a Schreier figure-of-merit of 165 dB.
A 10-bit 2.6-GS/s Time-Interleaved SAR ADC With a Digital-Mixing Timing-Skew Calibration Technique. A 16-channel time-interleaved 10-bit SAR analog-to-digital converter (ADC), employing the proposed delta-sampling auxiliary SAR ADCs and a digital-mixing calibration technique to compensate timing-skew error, achieves a 2.6-GS/s sampling rate. The ADC has been fabricated in a 40-nm CMOS technology and achieves a 50.6-dB signal-to-noise-and-distortion ratio at Nyquist rate while dissipating 18.4 mW...
A 12-bit 40-MS/s SAR ADC With a Fast-Binary-Window DAC Switching Scheme. This paper presents a 12-bit 40-MS/s successive approximation register analog-to-digital converter (ADC) for ultrasound imaging systems. By incorporating a fast binary window digital-to-analog converter (DAC) switching technique, the problematic most significant bit transition glitch was removed to improve linearity without increasing the input capacitance or using a calibration scheme. A hybrid D...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
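As a concrete illustration of the CP decomposition named above, here is a bare-bones alternating-least-squares (ALS) sketch for a 3-way tensor; real work should use a mature package such as TensorLy or the Tensor Toolbox mentioned in the survey.

```python
import numpy as np

# Minimal CP-ALS sketch for a 3-way tensor T[i,j,k] ~ sum_r A[i,r]B[j,r]C[k,r].
def cp_als(T, rank, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    A = [rng.normal(size=(d, rank)) for d in T.shape]
    for _ in range(iters):
        for n in range(3):
            u, v = [A[m] for m in range(3) if m != n]   # other factors, mode order
            kr = np.einsum('ir,jr->ijr', u, v).reshape(-1, rank)  # Khatri-Rao
            unfold = np.moveaxis(T, n, 0).reshape(T.shape[n], -1) # mode-n unfolding
            gram = (u.T @ u) * (v.T @ v)                # kr.T @ kr via Hadamard of Grams
            A[n] = unfold @ kr @ np.linalg.pinv(gram)
    return A

# sanity check on an exactly rank-2 random tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(s, 2)) for s in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(T - That) / np.linalg.norm(T))
```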
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
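To show the mechanics described above (bagging plus random feature subsets, majority vote), here is a deliberately small sketch built on scikit-learn's decision trees; in practice one would simply use sklearn.ensemble.RandomForestClassifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Bare-bones random forest: bootstrap each tree's training set, restrict
# each split to a random feature subset, and predict by majority vote.
class TinyForest:
    def __init__(self, n_trees=25, max_features="sqrt", seed=0):
        self.n_trees, self.max_features = n_trees, max_features
        self.rng = np.random.default_rng(seed)
        self.trees = []

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.n_trees):
            idx = self.rng.integers(0, n, n)          # bootstrap sample
            tree = DecisionTreeClassifier(
                max_features=self.max_features,
                random_state=int(self.rng.integers(2 ** 31 - 1)))
            self.trees.append(tree.fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        votes = np.stack([t.predict(X) for t in self.trees])
        return (votes.mean(axis=0) > 0.5).astype(int)  # majority vote (binary)

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
model = TinyForest().fit(X[:300], y[:300])
print("holdout accuracy:", (model.predict(X[300:]) == y[300:]).mean())
```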
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A 93% efficiency reconfigurable switched-capacitor DC-DC converter using on-chip ferroelectric capacitors.
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.11, 0.1, 0.1, 0.1, 0.05, 0.01, 0, 0, 0, 0, 0, 0, 0, 0
Affine Transformed IT2 Fuzzy Event-Triggered Control Under Deception Attacks Stabilization of a type-2 fuzzy system in the presence of cyber attacks is investigated in this article. For practical applications, a class of nonlinear systems can be represented by an interval type-2 fuzzy system through a set of membership functions. Unlike existing schemes, 1) affine membership functions are considered in the controller design; moreover, 2) a robust adaptive event-triggered control is proposed to avoid unwanted triggering events, which makes the proposed scheme more reliable and relaxes the conservativeness of the stability analysis. In the numerical simulation, the mass-spring-damper system and the tracking control system are considered to illustrate the robustness and effectiveness of the proposed approach.
Controllability and Observability of a Well-Posed System Coupled With a Finite-Dimensional System We consider coupled systems consisting of a well-posed and strictly proper (hence regular) subsystem and a finite-dimensional subsystem connected in feedback. The external world interacts with the coupled system via the finite-dimensional part, which receives the external input and sends out the output. Under several assumptions, we derive well-posedness, regularity, exact (or approximate) controllability and exact (or approximate) observability results for such coupled systems.
Stabilization for a Coupled PDE-ODE Control System A control system of an ODE and a diffusion PDE is discussed in this paper. The novelty lies in that the system is coupled. The method of PDE backstepping, together with some special techniques, is employed to stabilize the coupled PDE-ODE control system, which is transformed into an exponentially stable PDE-ODE cascade by an invertible integral transformation. A state feedback boundary controller is designed. Moreover, an exponentially convergent observer for the anti-collocated setup is proposed, and the output feedback boundary control problem is solved. For both the state and output feedback boundary controllers, exponential stability analyses in the sense of the corresponding norms for the resulting closed-loop systems are given through rigorous proofs.
Sampled-Data Fuzzy Control for Nonlinear Coupled Parabolic PDE-ODE Systems. In this paper, a sampled-data fuzzy control problem is addressed for a class of nonlinear coupled systems, which are described by a parabolic partial differential equation (PDE) and an ordinary differential equation (ODE). Initially, the nonlinear coupled system is accurately represented by the Takagi-Sugeno (T-S) fuzzy coupled parabolic PDE-ODE model. Then, based on the T-S fuzzy model, a novel t...
Sampled-Data Fuzzy Control With Guaranteed Cost for Nonlinear Parabolic PDE Systems via Static Output Feedback This article introduces a sampled-data (SD) static output feedback fuzzy control (FC) with guaranteed cost for nonlinear parabolic partial differential equation (PDE) systems. First, a Takagi–Sugeno (T–S) fuzzy parabolic PDE model is employed to represent the nonlinear PDE system. Second, with the aid of the T–S fuzzy PDE model, a SD FC design with guaranteed cost under spatially averaged measurements is developed in the formulation of linear matrix inequalities by utilizing a time-dependent Lyapunov functional and inequality techniques, which can stabilize exponentially the PDE system while providing an optimized upper bound on the cost function. The membership functions of the proposed controller are determined by the measurement output and independent of the fuzzy PDE plant model. Finally, simulation results are presented to control the diffusion equation and the FitzHugh–Nagumo equation for demonstrating the effectiveness of the proposed method.
A secure control framework for resource-limited adversaries. Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary’s model knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to denial-of-service, replay, zero-dynamics, and bias injection attacks on linear time-invariant systems can be analyzed using this framework. Furthermore, the attack policy for each scenario is described and the attack’s impact is characterized using the concept of safe sets. An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures.
Stability Analysis of Positive Interval Type-2 TSK Systems With Application to Energy Markets Positive systems play an important role in many fields including biology, chemistry, and economics, among others. This paper discusses the stability of interval type-2 discrete-time positive Takagi-Sugeno-Kang (TSK) fuzzy systems. It discusses positive TSK systems and their nonzero equilibrium point. It then provides sufficient conditions for their exponential stability and instability. All the proposed stability and instability conditions can be tested using linear matrix inequalities. The stability and instability tests are demonstrated through application to a TSK model of the electric power market under a variety of market conditions.
Integrator backstepping control of a brush DC motor turning a robotic load In this paper, we design and implement integrator backstepping controllers (i.e., adaptive and robust) for a brush DC motor driving a one-link robot manipulator. Through the use of Lyapunov stability-type arguments, we show that both of these controllers ensure "good" load position tracking despite parametric uncertainty throughout the entire electromechanical system. Experimental results are presented to illustrate the performance and feasibility of implementing the nonlinear control algorithms.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes; e) Predominant role of wireless access; f) Strong presence of streaming and real-time applications; g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
The evolution of hardware platforms for mobile 'software defined radio' terminals. The deployment of communication systems mainly depends on the availability of appropriate microelectronics. Therefore, the Fraunhofer-Institut für Mikroelektronische Schaltungen und Systeme (IMS) considers the combined approach to communication and microelectronic system design as crucial. This paper explores the impact of anticipated communication services for future wireless communication systems on the evolution of microelectronics for wireless terminals. A roadmap is presented which predicts the hardware/software split of future software defined radio terminals (SDR terminals). Additionally, a new philosophy for analog and digital codesign is introduced, which may help to accelerate the appearance of mobile software defined radio terminals.
Interactive presentation: An FPGA based all-digital transmitter with radio frequency output for software defined radio In this paper, we present the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes RF signal in the digital domain, therefore eliminates the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from RFPWM is centered at 800MHz with 64QAM signaling format. The entire transmitter is implemented using Xilinx Virtex2pro device with on chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR.
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the required target information of the current environment for higher-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments is still a serious problem in robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. Firstly, this paper performs depth filtering with an improved version of the traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and performs object segmentation of the color image at a resolution of 640×480 from a Kinect camera. Finally, the segmentation results are applied to object recognition in indoor scenes to validate the effectiveness of the scene segmentation. The results of indoor segmentation demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy of object recognition and reduce the time of object recognition in indoor cluttered scenes.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.24, 0.24, 0.24, 0.24, 0.24, 0.12, 0.02, 0.002222, 0, 0, 0, 0, 0, 0
A Wireless Power and Data Transfer IC for Neural Prostheses Using a Single Inductive Link With Frequency-Splitting Characteristic This paper presents a frequency-splitting-based wireless power and data transfer IC that simultaneously delivers power and forward data over a single inductive link. For data transmission, frequency-shift keying (FSK) is utilized because the FSK modulation scheme supports continuous wireless power transmission without disruption of the carrier amplitude. Moreover, the link that manifests the frequ...
Spike Sorting: The First Step in Decoding the Brain In this article, we present an overview of the spike-sorting problem, its current solutions, and the challenges that remain. Because of the increasing demand for chronically implanted spike-sorting hardware, we will also discuss implementation considerations.
An Integrated Passive Phase-Shift Keying Modulator for Biomedical Implants With Power Telemetry Over a Single Inductive Link. This paper presents a passive phase-shift keying (PPSK) modulator for uplink data transmission for biomedical implants with simultaneous power and data transmission over a single 13.56 MHz inductive link. The PPSK modulator provides a data rate up to 1.35 Mbps with a modulation index between 3% and 38% for a variation of the coupling coefficient between 0.05 and 0.26. This modulation scheme is par...
Analysis and Design of a Robust, Low-Power, Inductively Coupled LSK Data Link A low-power half-duplex data link transmits data at up to 4 Mb/s over coupled inductors across distances of up to 5 cm. The inductors are part of two resonators which tune a free-running oscillator to one of their natural modes. This gives robustness to changes in coil distance and relative orientation. With load-shift keying (LSK) employed in the remote transponder, the link is modeled analytically to reveal some unexpected properties. A complete design guide aids selection of coil size for range, carrier frequency for data rate, and Q -switching to extend range. Owing to low power circuit design, the remote transponder consumes 0.075 pJ/b in transmit, 5 pJ/b in receive mode.
Design of a Bone-Guided Cochlear Implant Microsystem With Monopolar Biphasic Multiple Stimulations and Evoked Compound Action Potential Acquisition and Its In Vivo Verification A CMOS bone-guided cochlear implant (BGCI) microsystem is proposed and verified. In the implanted System on Chip (SoC) of the proposed BGCI, the evoked compound action potential (ECAP) acquisition and electrode–tissue impedance measurement (EAEIM) circuit is integrated to measure both ECAP and electrode–tissue impedance for clinical diagnoses. Both positive-/negative-voltage charge pumps and monop...
Feasibility Study on Active Back Telemetry and Power Transmission Through an Inductive Link for Millimeter-Sized Biomedical Implants. This paper presents a feasibility study of wireless power and data transmission through an inductive link to a 1-mm2 implant, to be used as a free-floating neural probe, distributed across a brain area of interest. The proposed structure utilizes a four-coil inductive link for back telemetry, shared with a three-coil link for wireless power transmission. We propose a design procedure for geometric...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
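The core key-to-node mapping can be illustrated with a few lines of consistent hashing. This sketch collapses Chord's distributed O(log N) finger-table lookup into a local sorted-list successor search, so it shows only the mapping semantics, not the routing; the node names and key are illustrative.

```python
import hashlib
from bisect import bisect_left

# Chord-style consistent hashing on a 2**M identifier ring: a key is
# stored at the first node whose id equals or follows the key's id.
M = 32

def ring_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), 'big') % (2 ** M)

class Ring:
    def __init__(self, nodes):
        self.ids = sorted(ring_id(n) for n in nodes)

    def successor(self, key: str) -> int:
        k = ring_id(key)
        i = bisect_left(self.ids, k)
        return self.ids[i % len(self.ids)]   # wrap around the ring

ring = Ring([f"node{i}" for i in range(8)])
print("key 'paper.pdf' maps to node id", ring.successor("paper.pdf"))
```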
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly, to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A 93% efficiency reconfigurable switched-capacitor DC-DC converter using on-chip ferroelectric capacitors.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2MHz and an FoM of 53 fJ/conversion-step.
A Data-Compressive Wired-OR Readout for Massively Parallel Neural Recording. Neural interfaces of the future will be used to help restore lost sensory, motor, and other capabilities. However, realizing this futuristic promise requires a major leap forward in how electronic devices interface with the nervous system. Next generation neural interfaces must support parallel recording from tens of thousands of electrodes within the form factor and power budget of a fully implan...
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0
Blind Calibration of Timing Offsets for Four-Channel Time-Interleaved ADCs In this paper, we describe a blind calibration method for timing mismatches in a four-channel time-interleaved analog-to-digital converter (ADC). The proposed method requires that the input signal should be slightly oversampled. This ensures that there exists a frequency band around the zero frequency where the Fourier transforms of the four ADC subchannels contain only three alias components, ins...
Challenges in the design of high-speed clock and data recovery circuits This article describes the challenges in the design of monolithic clock and data recovery circuits used in high-speed transceivers. Following an overview of general issues, the task of phase detection for random data is addressed. Next, Hogge (1985), Alexander (1975), and half-rate phase detectors are introduced and their trade-offs outlined. Finally, a number of clock and data recovery architectures are presented.
Adaptive blind compensation of gain and timing mismatches in M-channel time-interleaved ADCs Gain and timing mismatches among sub-converters limit the performance of time-interleaved analog-to-digital converters (TIADCs). In this paper we present a blind adaptive method, based on the least-mean-square (LMS) algorithm, to compensate gain and timing mismatches in TIADCs. Similar to other methods in the literature, we assume a slightly oversampled input signal, but, contrary to them, we can apply our method to an arbitrary number of channels in a straightforward way. We give a detailed description of the compensation and the identification part of the method and demonstrate its effectiveness through numerical simulations.
Adaptive Calibration of Channel Mismatches in Time-Interleaved ADCs Based on Equivalent Signal Recombination In this paper, we present an adaptive calibration method for correcting channel mismatches in time-interleaved analog-to-digital converters (TIADCs). An equivalent signal recombination structure is proposed to eliminate aliasing components when the input signal bandwidth is greater than the Nyquist bandwidth of the sub-ADCs in a TIADC. A band-limited pseudorandom noise sequence is employed as the desired output of the TIADC and simultaneously is also converted into an analog signal, which is injected into the TIADC as the training signal during the calibration process. Adaptive calibration filters with parallel structures are used to optimize the TIADC output to the desired output. The main advantage of the proposed method is that it avoids a complicated error estimation or measurement and largely reduces the computational complexity. A four-channel 12-bit 400-MHz TIADC and its calibration algorithm are implemented by hardware, and the measured spurious-free dynamic range is greater than 76 dB up 90% of the entire Nyquist band. The hardware implementation cost can be dramatically reduced, especially in instrumentation or measurement equipment applications, where special calibration phases and system stoppages are common.
Adaptive Blind Timing Mismatch Calibration with Low Power Consumption in M-Channel Time-Interleaved ADC. This paper proposes an adaptive blind calibration scheme to minimize timing mismatch in M-channel time-interleaved analog-to-digital converter (TIADC). By using a derivative filter, the timing mismatch can be calibrated with the coefficients estimated by calculating the average value of the relative timing errors for all sub-ADCs. The coefficients can be estimated by utilizing the filtered-X least-mean square algorithm. The main advantages of the proposed method are that the difficulty of the implementation and the power consumption can be reduced dramatically. Simulation results show the effectiveness of the proposed calibration technique. The design is synthesized on the TSMC-55 nm technology, and the estimated power consumption of digital part is about 4.27 mW with 1.2 V supply voltage. The measurement results of the FPGA validate system show that the proposed calibration can improve the SNR of a four-channel 400 MHz 14-b real TIADC system from 39.82 to 65.13 dB.
Improved Blind Timing Skew Estimation Based on Spectrum Sparsity and ApFFT in Time-Interleaved ADCs Timing skews among channels seriously degrade the time-interleaved analog-to-digital converter (TIADC) performance, which can be improved by the blind timing skew estimation (TSE) technique. In this paper, we propose the all-phase fast Fourier transform (ApFFT) based on spectrum sparsity signal phase relationship blind TSE (ApFFT-SSPR-BLTSE) algorithm. The ApFFT-SSPR-BLTSE algorithm reduces compu...
All-Digital Calibration of Timing Mismatch Error in Time-Interleaved Analog-to-Digital Converters. This paper presents an all-digital background calibration for timing mismatch in time-interleaved analog-to-digital converters (TI-ADCs). It combines digital adaptive timing mismatch estimation and digital derivative-based correction, achieving lower hardware cost and better suppression of timing mismatch tones than previous work. In addition, for the first time closed-form exact expressions for t...
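The derivative-based correction used in this line of work is easy to demonstrate: a channel that samples dt too late sees roughly y(t) + dt*y'(t), so subtracting dt times an estimated derivative removes the first-order skew error. The skew, the tone frequency, and the crude np.gradient differentiator below are illustrative stand-ins for an estimated dt and a designed FIR differentiator.

```python
import numpy as np

# First-order timing-skew correction for one sub-ADC channel:
# y(n + dt) ~ y(n) + dt * y'(n), so subtract dt * (derivative estimate).
n = np.arange(1024)
dt = 0.05                    # skew, as a fraction of the sample period
f0 = 0.11                    # input tone frequency, cycles/sample
ideal = np.sin(2 * np.pi * f0 * n)
skewed = np.sin(2 * np.pi * f0 * (n + dt))

deriv = np.gradient(skewed)  # crude central-difference derivative estimate
corrected = skewed - dt * deriv

for name, x in [("skewed", skewed), ("corrected", corrected)]:
    err = np.sqrt(np.mean((x - ideal) ** 2))
    print(f"{name}: rms error vs ideal = {err:.4f}")
```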
A 6b 3GS/s 11mW fully dynamic flash ADC in 40nm CMOS with reduced number of comparators A 6b 3GS/s fully dynamic flash ADC is fabricated in 40nm CMOS and occupies 0.021mm2. Dynamic comparators with digitally controlled built-in offset are realized with imbalanced tails. Half of the comparators are substituted with simple SR latches. The ADC achieves SNDRs of 36.2dB and 33.1dB at DC and Nyquist, respectively, while consuming 11mW from a 1.1V supply.
Direct bandpass sampling of multiple distinct RF signals A goal in the software radio design philosophy is to place the analog-to-digital converter as near the antenna as possible. This objective has been demonstrated for the case of a single input signal. Bandpass sampling has been applied to downconvert, or intentionally alias, the information bandwidth of a radio frequency (RF) signal to a desired intermediate frequency. The design of the software radio becomes more interesting when two or more distinct signals are received. The traditional ap- proach for multiple signals would be to bandpass sample a continuous span of spectrum containing all the desired signals. The disadvantage with this approach is that the sampling rate and associated discrete processing rate are based on the span of spectrum as opposed to the information bandwidths of the signals of interest. Proposed here is a technique to determine the absolute min- imum sampling frequency for direct digitization of multiple, nonadjacent, frequency bands. The entire process is based on the calculation of a single parameter—the sampling frequency. The result is a simple, yet elegant, front-end design for the reception and bandpass sampling of multiple RF signals. Experimental results using RF transmissions from the U.S. Global Positioning System—Standard Position Service (GPS-SPS) and the Russian Global Navigation Satellite System (GLONASS) are used to illustrate and verify the theory.
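For the single-band case, the classical valid-rate condition is 2*fH/n <= fs <= 2*fL/(n-1) for integer n; the paper's contribution is extending this search to several non-adjacent bands at once. The sketch below enumerates the single-band ranges for an illustrative 20-25 MHz band, not the GPS/GLONASS bands used in the experiments.

```python
# Valid uniform bandpass-sampling rates for a single band [f_lo, f_hi]:
# 2*f_hi/n <= fs <= 2*f_lo/(n-1) for n = 1 .. floor(f_hi / bandwidth).
def valid_rates(f_lo, f_hi):
    bw = f_hi - f_lo
    ranges = []
    for n in range(1, int(f_hi // bw) + 1):
        lo = 2 * f_hi / n
        hi = float('inf') if n == 1 else 2 * f_lo / (n - 1)
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# illustrative 5 MHz-wide band at 20-25 MHz; n=1 recovers ordinary Nyquist
for n, lo, hi in valid_rates(20e6, 25e6):
    top = "inf" if hi == float('inf') else f"{hi / 1e6:.2f}"
    print(f"n={n}: {lo / 1e6:.2f} MHz <= fs <= {top} MHz")
```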
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Mobility Management Strategies in Heterogeneous Cognitive Radio Networks Considering the capacity gain of the secondary system and the capacity loss of the primary system caused by the newly accessing user, a distributed binary power allocation (admittance criterion) is proposed in dense cognitive networks including plentiful ...
On location observability notions for switching systems. The focus of this paper is on the analysis of initial discrete state distinguishability notions for switching systems, in a discrete time setting. Moreover, the relationship between initial discrete state distinguishability and the problem of reconstructing the current discrete state is addressed.
Formal Analysis of Leader Election in MANETs Using Real-Time Maude.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
Scores: 1.025949, 0.024242, 0.022468, 0.019596, 0.018182, 0.018182, 0.006061, 0.000013, 0, 0, 0, 0, 0, 0
Graph-Based Spatio-Temporal Backpropagation for Training Spiking Neural Networks Dedicated hardware for spiking neural networks (SNN) reduces energy consumption with spike-driven computing. This paper proposes a graph-based spatio-temporal backpropagation (G-STBP) to train SNN, aiming to enhance spike sparsity for energy efficiency, while ensuring the accuracy. A differentiable leaky integrate-and-fire (LIF) model is suggested to establish the backpropagation path. The sparse ...
A 2.89 µW Dry-Electrode Enabled Clockless Wireless ECG SoC for Wearable Applications. This paper presents a fully integrated wireless electrocardiogram (ECG) SoC implemented in asynchronous architecture, which does not require system clock as well as off-chip antenna. Several low power techniques are proposed to minimize power consumption. At the system level, a newly introduced event-driven system architecture facilitates the asynchronous implementation, thus removes the system cl...
ECG-based Heartbeat Classification in Neuromorphic Hardware Heart activity can be monitored by means of the ElectroCardioGram (ECG), which is widely used to detect heart diseases due to its non-invasive nature. Trained cardiologists can detect anomalies by visually inspecting recordings of the ECG signals. However, arrhythmias occur intermittently, especially in early stages, and therefore they can be missed in routine check recordings. We propose a hardware setup that enables always-on monitoring of ECG signals in wearables. The system exploits a fully event-driven approach for carrying out arrhythmia detection and classification employing a bio-inspired spiking neural network. The two-stage Spiking Neural Network (SNN) topology comprises a recurrent network of spiking neurons whose output is classified by a cluster of Leaky integrate-and-fire (LIF) neurons that have been trained in a supervised manner to distinguish 17 types of cardiac patterns. We introduce a method for compressing ECG signals into a stream of asynchronous digital events that are used to stimulate the recurrent SNN. Using ablative analysis, we demonstrate the impact of the recurrent SNN and we show an overall classification accuracy of 95% on the PhysioNet Arrhythmia Database provided by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT/BIH). The proposed system has been implemented on an event-driven mixed-signal analog/digital neuromorphic processor. This work contributes to the realization of an energy-efficient, wearable, and accurate multi-class ECG classification system.
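A leaky integrate-and-fire neuron, the unit used in the classifier stage above, can be written in a few lines. The time constant, threshold, and input drive below are illustrative choices, not the values used on the neuromorphic processor.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward zero, integrates the input, and fires when it crosses
# threshold, after which it resets.
def lif(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)   # leaky integration
        if v >= v_th:              # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = rng.uniform(0, 0.12, size=200)  # noisy input drive
out = lif(current)
print("output spike count:", int(out.sum()))
```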
Energy Efficient ECG Classification With Spiking Neural Network Heart disease is one of the top ten threats to global health in 2019 according to the WHO. Continuous monitoring of ECG on wearable devices can detect abnormality in the user's heartbeat early, thereby significantly increasing the chance of early intervention, which is known to be the key to saving lives. In this paper, we present a set of inter-patient ECG classification methods that use convolutional neural networks (CNNs) and spiking neural networks (SNNs). We focus on inter-patient heartbeat classification, in which the model is trained over several patients and then used for inference on patients not seen in training. Raw heartbeat data is used in this paper because most wearable devices cannot deal with complex data preprocessing. A two-step convolutional neural network testing method is proposed to save power. For even greater energy savings, a spiking neural network is also proposed. The latter is obtained by converting the trained CNN model with a less than one percent accuracy drop. The average power of a two-class SNN is 0.077 W, or 0.0074x that of previously proposed neural-network-based solutions.
Classification of Cardiac Arrhythmias Based on Artificial Neural Networks and Continuous-in-Time Discrete-in-Amplitude Signal Flow Conventional Artificial Neural Networks (ANNs) for classification of cardiac arrhythmias are based on Nyquist-sampled electrocardiogram (ECG) signals. The uniform sampling scheme introduces large redundancy in the ANN, which results in high power and large silicon area. To address these issues, we propose to use a continuous-in-time discrete-in-amplitude (CTDA) sampling scheme as the input of the network. The CTDA sampling scheme significantly reduces the sample points on the baseline part while providing more detail on useful features in the ECG signal. It is shown that the CTDA sampling scheme achieves significant savings on arithmetic operations in the ANN while maintaining similar performance to Nyquist sampling in the classification. The proposed method is evaluated on the MIT-BIH arrhythmia database following the AAMI recommended practice.
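CTDA (level-crossing) sampling itself is simple to model: a sample is emitted only when the input moves by at least one amplitude level, so a flat ECG baseline produces almost no samples. The level spacing and the synthetic QRS-like pulse below are illustrative.

```python
import numpy as np

# Level-crossing (continuous-in-time discrete-in-amplitude) sampling:
# emit a sample only when the signal moves by at least one level, delta.
def ctda_sample(x, delta=0.1):
    out, last = [], x[0]
    for i, v in enumerate(x):
        if abs(v - last) >= delta:
            last = round(v / delta) * delta   # snap to the amplitude grid
            out.append((i, last))
    return out

t = np.linspace(0, 1, 1000)
ecg_like = np.exp(-((t - 0.5) ** 2) / 2e-4)   # a single QRS-like pulse
samples = ctda_sample(ecg_like)
print(f"{len(samples)} CTDA samples vs {len(t)} uniform samples")
```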
A 13.34μW Event-driven Patient-specific ANN Cardiac Arrhythmia Classifier for Wearable ECG Sensors. Artificial neural network (ANN) and its variants are favored algorithms in designing cardiac arrhythmia classifiers (CACs) for their high accuracy. However, the implementation of an ultralow-power ANN-CAC is challenging due to the intensive computations. Moreover, the imbalanced MIT-BIH database limits the ANN-CAC performance. Several novel techniques are proposed to address the challenges in the low power implementation. Firstly, a continuous-in-time discrete-in-amplitude (CTDA) signal flow is adopted to reduce the multiplication operations. Secondly, a conditional grouping scheme (CGS) in combination with biased training (BT) is proposed to handle the imbalanced training samples for better training convergence and evaluation accuracy. Thirdly, arithmetic unit sharing with a customized high-performance multiplier improves the power efficiency. Verified on an FPGA and synthesized in a 0.18 μm CMOS process, the proposed CTDA ANN-CAC can classify an arrhythmia within 252 μs at a 25 MHz clock frequency with an average power of 13.34 μW for a 75 bpm heart rate. Evaluated on the MIT-BIH database, it shows over 98% classification accuracy, 97% sensitivity, and 94% positive predictivity.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and has significantly improvement in line and load transient responses as well as performance in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks.Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic net-works, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
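The randomized cluster-head rotation at LEACH's core follows the well-known threshold T(n) = P / (1 - P * (r mod 1/P)) for nodes that have not served in the current epoch. The sketch below simulates a few election rounds with illustrative parameters (100 nodes, P = 0.05).

```python
import random

# LEACH cluster-head election: each round r, an eligible node becomes a
# cluster head with probability T = P / (1 - P * (r mod 1/P)); nodes that
# have served sit out until the epoch of 1/P rounds restarts.
def elect_heads(num_nodes=100, P=0.05, rounds=5, seed=0):
    rng = random.Random(seed)
    epoch = int(1 / P)
    served = set()
    for r in range(rounds):
        if r % epoch == 0:
            served.clear()                 # new epoch: everyone eligible again
        T = P / (1 - P * (r % epoch))
        heads = [n for n in range(num_nodes)
                 if n not in served and rng.random() < T]
        served.update(heads)
        print(f"round {r}: {len(heads)} cluster heads")

elect_heads()
```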
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
An Opportunistic Cognitive MAC Protocol for Coexistence with WLAN In recent decades, the demand for wireless spectrum has increased rapidly with the development of mobile communication services. Recent studies recognize that traditional fixed spectrum assignment does not use the spectrum efficiently. Such waste can be remedied with the advent of cognitive radio. Cognitive radio is a new type of technology that enables secondary usage by unlicensed users. This paper presents an opportunistic cognitive MAC protocol (OC-MAC) for cognitive radios to access unoccupied resources opportunistically and coexist with wireless local area networks (WLANs). Through a primary traffic prediction model and a transmission etiquette, OC-MAC avoids causing fatal interference to licensed users. An ns2 simulation model is then developed to evaluate its performance in scenarios with a coexisting WLAN and cognitive network.
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit scales down linearly as the operating frequency is lowered.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.04, 0, 0, 0, 0, 0, 0, 0, 0
Assembly Of Long Error-Prone Reads Using De Bruijn Graphs The recent breakthroughs in assembling long error-prone reads were based on the overlap-layout-consensus (OLC) approach and did not utilize the strengths of the alternative de Bruijn graph approach to genome assembly. Moreover, these studies often assume that applications of the de Bruijn graph approach are limited to short and accurate reads and that the OLC approach is the only practical paradigm for assembling long error-prone reads. We show how to generalize de Bruijn graphs for assembling long error-prone reads and describe the ABruijn assembler, which combines the de Bruijn graph and the OLC approaches and results in accurate genome reconstructions.
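To make the contrast with OLC concrete, here is a minimal Python sketch of the textbook de Bruijn graph construction from exact k-mers. ABruijn's actual generalization for error-prone reads (building the graph from a sparse set of well-supported strings) is more involved; treat this only as the classical baseline it extends.

```python
from collections import defaultdict

def debruijn_graph(reads, k):
    """Textbook de Bruijn graph: nodes are (k-1)-mers, one edge per k-mer occurrence."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # edge: prefix -> suffix
    return graph

reads = ["ACGTACGT", "CGTACGTA"]
for src, dsts in sorted(debruijn_graph(reads, 4).items()):
    print(src, "->", dsts)
```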
TILE64 Processor: A 64-Core SoC with Mesh Interconnect The TILE64™ processor is a multicore SoC targeting the high-performance demands of a wide range of embedded applications across networking and digital multimedia applications. The 64 tile processors are arranged in an 8×8 array and connect through a scalable 2D mesh network with high-speed I/Os on the periphery. Each general-purpose processor is identical and capable of running SMP Linux.
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Cache-Based Application Detection in the Cloud Using Machine Learning. Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware level leakages on shared physical servers. A co-located machine can readily feel the presence of a co-located instance with a heavy computational load through performance degradation due to contention on shared resources. Shared cache architectures such as the last level cache (LLC) have become a popular leakage source to mount cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine grain information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across the many versions and numerous target platforms, a task too complicated, error-prone and costly to be handled by human beings. Here we propose a machine learning based technique to classify applications according to their cache access profiles. We show that with minimal and simple manual processing steps feature vectors can be used to train models using support vector machines to classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a successful classification rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2 the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.
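A hedged sketch of the classification pipeline this abstract describes, using an SVM from scikit-learn on synthetic stand-in "cache profile" feature vectors; real features would come from hardware cache measurements, and nothing here reproduces the paper's exact feature extraction.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-application cache-access profiles:
# one feature vector per observed trace, one class label per application.
rng = np.random.default_rng(0)
n_apps, traces_per_app, n_features = 5, 40, 64
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(traces_per_app, n_features))
               for i in range(n_apps)])
y = np.repeat(np.arange(n_apps), traces_per_app)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # train on labeled profiles
print("classification accuracy:", clf.score(X_te, y_te))
```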
NetCAT: Practical Cache Attacks from the Network Increased peripheral performance is causing strain on the memory subsystem of modern processors. For example, available DRAM throughput can no longer sustain the traffic of a modern network card. Scrambling to deliver the promised performance, instead of transferring peripheral data to and from DRAM, modern Intel processors perform I/O operations directly on the Last Level Cache (LLC). While Direct Cache Access (DCA) instead of Direct Memory Access (DMA) is a sensible performance optimization, it is unfortunately implemented without care for security, as the LLC is now shared between the CPU and all the attached devices, including the network card.In this paper, we reverse engineer the behavior of DCA, widely referred to as Data-Direct I/O (DDIO), on recent Intel processors and present its first security analysis. Based on our analysis, we present NetCAT, the first Network-based PRIME+PROBE Cache Attack on the processor's LLC of a remote machine. We show that NetCAT not only enables attacks in cooperative settings where an attacker can build a covert channel between a network client and a sandboxed server process (without network), but more worryingly, in general adversarial settings. In such settings, NetCAT can enable disclosure of network timing-based sensitive information. As an example, we show a keystroke timing attack on a victim SSH connection belonging to another client on the target server. Our results should caution processor vendors against unsupervised sharing of (additional) microarchitectural components with peripherals exposed to malicious input.
On introducing noise into the bus-contention channel. We explore two approaches to introducing noise into the bus-contention channel: an existing approach called fuzzy time, and a novel approach called probabilistic partitioning. We compare the two approaches in terms of the impact on covert channel capacity, the impact on performance, the amount of random data needed, and their suitability for various applications. For probabilistic partitioning, we obtain a precise tradeoff between covert channel capacity and performance.
Last-Level Cache Side-Channel Attacks are Practical We present an effective implementation of the Prime+Probe side-channel attack against the last-level cache. We measure the capacity of the covert channel the attack creates and demonstrate a cross-core, cross-VM attack on multiple versions of GnuPG. Our technique achieves a high attack resolution without relying on weaknesses in the OS or virtual machine monitor or on sharing memory between attacker and victim.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in “Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms,” pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w—despite similar complexity results for related problems, it appears that this growth has never been analyzed.
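For intuition, here is a Python sketch of the deterministic doubling strategy for the two-path case (worst-case competitive ratio 9); the paper's contribution is a randomized strategy whose expected ratio is roughly half of that, which this sketch does not implement.

```python
def cow_path_doubling(goal: int) -> int:
    """Deterministic doubling on two paths (w = 2).

    `goal` encodes the goal position: positive = right path, negative = left.
    Returns the total distance walked before reaching the goal.
    """
    total, step, direction = 0, 1, 1
    while True:
        dist = goal if direction > 0 else -goal   # distance along current path
        if 0 < dist <= step:
            return total + dist                   # goal found on this sweep
        total += 2 * step                         # walk out `step` and back
        step *= 2
        direction = -direction

print(cow_path_doubling(5))    # walked 35 for a goal at distance 5
print(cow_path_doubling(-13))  # walked 75 for a goal at distance 13
```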
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. For such implementations, no efficient technique to identify the beginning of AES rounds was previously known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial-of-service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than that previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing the Halanay inequality to time-varying differential inequalities.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible enough to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signals with various signal dimensions (128, 256, 384, and 512). Data c...
score_0 to score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0.004762, 0, 0, 0, 0, 0, 0, 0
A 9-Bit 150-MS/s Subrange ADC Based on SAR Architecture in 90-nm CMOS This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.
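The fine conversion in such a converter is a binary search, one comparator decision per bit. A minimal Python model of plain successive approximation is shown below; the subrange variant in this abstract first resolves the top 3.5 bits with a flash stage, which the sketch omits.

```python
def sar_convert(vin: float, vref: float, bits: int) -> int:
    """Successive approximation: test one bit per cycle, MSB first."""
    code = 0
    for b in reversed(range(bits)):
        code |= 1 << b                      # tentatively set the bit
        vdac = vref * code / (1 << bits)    # capacitive-DAC trial voltage
        if vin < vdac:
            code &= ~(1 << b)               # comparator says too high: clear it
    return code

print(sar_convert(0.637, vref=1.0, bits=9))  # -> 326, i.e. 0.637 * 512 rounded down
```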
A 0.5-V 5.2-fJ/Conversion-Step Full Asynchronous SAR ADC With Leakage Power Reduction Down to 650 pW by Boosted Self-Power Gating in 40-nm CMOS. This paper presents an ultralow-power and ultralow-voltage SAR ADC. Full asynchronous operation and boosted self-power gating are proposed to improve conversion accuracy and reduce static leakage power. By combining high-threshold-voltage (HVt) and low-threshold-voltage (LVt) MOSFETs, the leakage power is reduced without decreasing the maximum sampling frequency. The test chip in 40-nm CMOS pr...
A 10-b Ternary SAR ADC With Quantization Time Information Utilization. The design of a ternary successive approximation (TSAR) analog-to-digital converter (ADC) with quantization time information utilization is proposed. The TSAR examines the transient information of a typical dynamic SAR voltage comparator to provide accuracy, speed, and power benefits. Full half-bit redundancy is shown, allowing for residue shaping which provides an additional 6 dB of signal-to-qua...
A SAR-Assisted Two-Stage Pipeline ADC. Successive approximation register (SAR) ADC architectures are popular for achieving high energy efficiency but they suffer from resolution and speed limitations. On the other hand, pipeline ADC architectures can achieve high resolution and speed but have lower energy efficiency and are more complex. We propose a two-stage pipeline ADC architecture with a large first-stage resolution, enabled with ...
A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-Shaping SAR ADC Although charge-redistribution successive approximation (SAR) ADCs are highly efficient, comparator noise and other effects limit the most efficient operation to below 10-b ENOB. This work introduces an oversampling, noise-shaping SAR ADC architecture that achieves 10-b ENOB with an 8-b SAR DAC array. A noise-shaping scheme shapes both comparator noise and quantization noise, thereby decoupling comparator noise from ADC performance. The loop filter is comprised of a cascade of a two-tap charge-domain FIR filter and an integrator to achieve good noise shaping even with a low-quality integrator. The prototype ADC is fabricated in 65-nm CMOS and occupies a core area of 0.03 mm2. Operating at 90 MS/s, it consumes 806 μW from a 1.2-V supply.
A 12-Bit 10 MS/s SAR ADC With High Linearity and Energy-Efficient Switching. A 12-bit 10 MS/s SAR ADC with enhanced linearity and energy efficiency is presented in this paper. A novel switching scheme (COSS) is proposed to reduce the power consumption and the matching requirement for capacitors in SAR ADCs. The switching energy (including switching energy and reset energy), total capacitance and static performance (INL & DNL) of the proposed scheme are reduced by 98.08%, 7...
A new class of asynchronous A/D converters based on time quantization This work is a contribution to a drastic change in standard signal processing chains. The main objective is to reduce the power consumption by one or two orders of magnitude. Integrated Smart Devices and Communicating Objects are the application domains targeted by this work. In this context, we present a new class of Analog-to-Digital Converters (ADCs), based on an irregular sampling of the analog signal and an asynchronous design. Because they are not conventional, a complete design methodology is presented. It determines their characteristics given the required effective number of bits and the analog signal properties. It is shown that our approach leads to a significant reduction in terms of hardware complexity and power consumption. A prototype has been designed for speech applications, using the STMicroelectronics 0.18-μm CMOS technology. Electrical simulations prove that the figure of merit is increased by more than one order of magnitude compared to synchronous Nyquist ADCs.
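To illustrate the irregular, signal-driven sampling behind this class of converters, here is a small Python sketch of level-crossing sampling; the single-LSB tracking below is a simplification of ours, not the paper's exact scheme.

```python
import numpy as np

def level_crossing_sample(signal, t, delta):
    """Emit a (time, level) pair whenever the signal moves one LSB `delta`
    away from the last emitted level: sampling is driven by the signal."""
    samples = [(t[0], signal[0])]
    last = signal[0]
    for ti, si in zip(t[1:], signal[1:]):
        if abs(si - last) >= delta:
            last += delta * np.sign(si - last)
            samples.append((ti, last))
    return samples

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 3 * t)
events = level_crossing_sample(x, t, delta=2 / 2**4)  # 4-bit LSB over a 2 V range
print(len(events), "events instead of", len(t), "uniform samples")
```

A slowly varying input generates few events, which is where the power savings of asynchronous conversion come from.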
Jitter and phase noise in ring oscillators A companion analysis of clock jitter and phase noise of single-ended and differential ring oscillators is presented. The impulse sensitivity functions are used to derive expressions for the jitter and phase noise of ring oscillators. The effect of the number of stages, power dissipation, frequency of oscillation, and short-channel effects on the jitter and phase noise of ring oscillators is analyzed. Jitter and phase noise due to substrate and supply noise is discussed, and the effect of symmetry on the upconversion of 1/f noise is demonstrated. Several new design insights are given for low jitter/phase-noise design. Good agreement between theory and measurements is observed. Due to their integrated nature, ring oscillators have become an essential building block in many digital and communication systems. They are used as voltage-controlled oscillators (VCOs) in applications such as clock recovery circuits for serial data communications (1)-(4), disk-drive read channels (5), (6), on-chip clock distribution (7)-(10), and integrated frequency synthesizers (10), (11). Although they have not found many applications in radio frequency (RF), they can be used for some low-tier RF systems. Recently, there has been some work on modeling jitter and phase noise in ring oscillators. References (12) and (13) develop models for the clock jitter based on time-domain treatments for MOS and bipolar differential ring oscillators, respectively. Reference (14) proposes a frequency-domain approach to find the phase noise based on a linear time-invariant model for differential ring oscillators with a small number of stages. In this paper, we develop a parallel treatment of frequency-domain phase noise (15) and time-domain clock jitter for ring oscillators. We apply the phase-noise model presented in (16) to obtain general expressions for jitter and phase noise of the ring oscillators. The next section briefly reviews the phase-noise model presented in (16). In Section III, we apply the model to timing jitter and develop an expression for the timing jitter of oscillators, while Section IV provides the derivation of a closed-form expression to calculate the rms value of the impulse sensitivity function (ISF). Section V introduces expressions for jitter and phase noise in single-ended and differential ring oscillators.
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit-level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so-called converter-reshuffling to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
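In the spirit of the entropy-based metric mentioned above, here is a sketch of how one might score a power trace's resistance to correlation attacks; the exact metric from the paper is not reproduced, and both traces below are synthetic.

```python
import numpy as np

def trace_entropy(power_trace, bins=32):
    """Shannon entropy of the empirical distribution of power samples;
    a flatter (higher-entropy) profile is harder to correlate with data."""
    hist, _ = np.histogram(power_trace, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
plain = np.repeat([1.0, 3.0], 500)                   # unprotected: two power levels
reshuffled = plain + rng.uniform(-1, 1, plain.size)  # randomized converter phases
print(trace_entropy(plain), "<", trace_entropy(reshuffled))
```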
Design-oriented estimation of thermal noise in switched-capacitor circuits. Thermal noise represents a major limitation on the performance of most electronic circuits. It is particularly important in switched circuits, such as the switched-capacitor (SC) filters widely used in mixed-mode CMOS integrated circuits. In these circuits, switching introduces a boost in the power spectral density of the thermal noise due to aliasing. Unfortunately, even though the theory of nois...
Dynamic sensor collaboration via sequential Monte Carlo We consider the application of sequential Monte Carlo (SMC) methods for Bayesian inference to the problem of information-driven dynamic sensor collaboration in clutter environments for sensor networks. The dynamics of the system under consideration are described by nonlinear sensing models within randomly deployed sensor nodes. The exact solution to this problem is prohibitively complex due to the nonlinear nature of the system. The SMC methods are, therefore, employed to track the probabilistic dynamics of the system and to make the corresponding Bayesian estimates and predictions. To meet the specific requirements inherent in sensor network, such as low-power consumption and collaborative information processing, we propose a novel SMC solution that makes use of the auxiliary particle filter technique for data fusion at densely deployed sensor nodes, and the collapsed kernel representation of the a posteriori distribution for information exchange between sensor nodes. Furthermore, an efficient numerical method is proposed for approximating the entropy-based information utility in sensor selection. It is seen that under the SMC framework, the optimal sensor selection and collaboration can be implemented naturally, and significant improvement is achieved over existing methods in terms of localizing and tracking accuracies.
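A minimal bootstrap particle filter for a 1-D tracking toy problem, as a hedged sketch of the SMC machinery this abstract builds on; the paper's method uses the auxiliary particle filter and collapsed kernel representations, so this is only the basic skeleton it refines.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, sigma_q=0.5, sigma_r=1.0):
    """One SMC update: propagate through a random-walk motion model,
    reweight by a Gaussian sensor likelihood, then resample."""
    particles = particles + rng.normal(0, sigma_q, particles.size)   # predict
    weights = weights * np.exp(-0.5 * ((z - particles) / sigma_r) ** 2)
    weights /= weights.sum()
    idx = rng.choice(particles.size, particles.size, p=weights)      # resample
    return particles[idx], np.full(particles.size, 1 / particles.size)

particles = rng.normal(0, 5, 1000)          # prior belief about target position
weights = np.full(1000, 1 / 1000)
for z in [0.2, 0.9, 1.7, 2.4]:              # noisy measurements of a moving target
    particles, weights = pf_step(particles, weights, z)
    print("posterior mean:", round(float(particles.mean()), 3))
```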
Design and Analysis of a Class-D Stage With Harmonic Suppression. This paper presents the design and analysis of a low-power Class-D stage in 90 nm CMOS featuring a harmonic suppression technique, which cancels the 3rd harmonic by shaping the output voltage waveform. Only digital circuits are used and the short-circuit current present in Class-D inverter-based output stages is eliminated, relaxing the buffer requirements. Using buffers with reduced drive strengt...
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to the feedback information coming from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-R_ESR design configuration. The prototype fabricated using a TSMC 0.25-μm CMOS process occupies an area of 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0 to score_13: 1.101281, 0.102562, 0.051281, 0.034187, 0.017177, 0.00125, 0.000161, 0.00001, 0, 0, 0, 0, 0, 0
A Sizing Methodology for On-Chip Switched-Capacitor DC/DC Converters This paper proposes a systematic sizing methodology for switched-capacitor DC/DC converters aimed at maximizing the converter efficiency under the die area constraint. To do so, we propose first an analytical solution of the optimum switching frequency to maximize the converter efficiency. When the parasitic capacitances are low, this solution leads to an identical contribution of the switches and transfer capacitors to the converter output impedance. As the parasitic capacitances increase, the optimum switching frequency decreases. Secondly, optimum capacitor and switch sizes for maximum efficiency are provided. We show that the overdrive voltage strongly impacts the optimum switch width through the modification of their conductance. To support the sizing methodology, a model of the efficiency of switched-capacitor DC/DC converters is developed. It is validated against simulation and measurement results in 65 nm and 0.13 μm CMOS, respectively. The proposed sizing methodology shows how the converter efficiency can be traded-off for die area reduction and what is the impact of parasitic capacitances on the converter sizing.
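A toy Python model of the efficiency-versus-switching-frequency trade-off this abstract analyzes, combining the slow- and fast-switching-limit output impedances with a bottom-plate parasitic loss term; every constant here is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def sc_efficiency(fsw, iout=1e-3, vout=0.9, c_fly=1e-9, ron=1.0, alpha=0.1):
    """Toy 2:1 switched-capacitor converter efficiency vs switching frequency."""
    r_ssl = 1 / (fsw * c_fly)              # slow-switching limit (charge transfer)
    r_fsl = 4 * ron                        # fast-switching limit (switch resistance)
    rout = np.hypot(r_ssl, r_fsl)          # combined output impedance
    p_load = vout * iout
    p_cond = iout**2 * rout                # conduction loss in Rout
    p_par = alpha * c_fly * vout**2 * fsw  # bottom-plate parasitic loss
    return p_load / (p_load + p_cond + p_par)

f = np.logspace(5, 9, 200)
eff = sc_efficiency(f)
print(f"peak efficiency {eff.max():.1%} at fsw = {f[eff.argmax()]:.3g} Hz")
```

The peak sits where charge-transfer loss (falling with frequency) and parasitic switching loss (rising with frequency) balance, which is the optimum the paper derives analytically.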
A 1-V-Input Switched-Capacitor Voltage Converter With Voltage-Reference-Free Pulse-Density Modulation. A 1-V-input 0.45-V-output switched-capacitor (SC) voltage converter with voltage-reference-free pulse-density modulation (VRF-PDM) is proposed. The all-digital VRF-PDM scheme improves the efficiency from 17% to 73% at 50- μA output current by reducing the pulse density and eliminating the voltage reference circuit. An output voltage trimming by the hot-carrier injection to a comparator and a perio...
Conductance Modulation Techniques in Switched-Capacitor DC-DC Converter for Maximum-Efficiency Tracking and Ripple Mitigation in 22 nm Tri-Gate CMOS Switch conductance modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, in 22 nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures and, (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency improvements up to 15% are measured under low output voltage and load conditions. Load-independent output ripple of ≤50 mV is achieved, enabling reduced interleaving. Test chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
Scalable Parasitic Charge Redistribution: Design of High-Efficiency Fully Integrated Switched-Capacitor DC-DC Converters. This paper introduces a technique, called scalable parasitic charge redistribution (SPCR), that reduces the parasitic bottom-plate losses in fully integrated switched-capacitor (SC) voltage regulators up to any desired level. This is realized by continuously redistributing the parasitic charge in-between phase-shifted converter cores. Because earlier models described the ratio of this parasitic co...
Automotive Switched-Capacitor DC–DC Converter With High BW Power Mirror and Dual Supply Driver This paper presents circuit topologies and implementations for SC DC-DC converters with controlled charging current. It focuses on the power current mirror that regulates the charging current of the flying capacitor and on the switch drivers. The bandwidth and transient response of the power mirror are improved by inserting an auxiliary current mirror in its input signal path. Conventional switch ...
Design Strategy for Step-Up Charge Pumps With Variable Integer Conversion Ratios A method for identifying all possible configurations of 2-phase charge pumps giving an integer conversion ratio with a fixed number of flying capacitors is presented. A systematic strategy is proposed to design an integrated charge pump, as an example, with a variable gain of 6× and 7× in a standard 0.35-μm CMOS process using only 4 flying capacitors. Conduction loss is considered and minimized. Measurement results verify the validity of the design methodology.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
Analysis and Design Strategy of On-Chip Charge Pumps for Micro-power Energy Harvesting Applications.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Scalable video coding and transport over broadband wireless networks With the emergence of broadband wireless networks and increasing demand of multimedia information on the Internet, wireless multimedia services are foreseen to become widely deployed in the next decade. Real-time video transmission typically has requirements on quality of service (QoS). However, wireless channels are unreliable and the channel bandwidth varies with time, which may cause severe deg...
The Interdomain Connectivity of PlanetLab Nodes In this paper we investigate the interdomain connectivity of PlanetLab nodes. We note that about 85 percent of the hosts are located within what we call the Global Research and Educational Network (GREN) - an interconnected network of high speed research networks such as Internet2 in the USA and Dante in Europe. Since traffic with source and destination on the GREN is very likely to be transited solely by the GREN, this means that over 70 percent of the end-to-end measurements between PlanetLab node pairs represent measurements of GREN characteristics. We suggest that it may be possible to systematically choose the placement of new nodes so that as the PlanetLab platform grows it becomes a closer and closer approximation to the Global Internet.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A control engineering perspective to radio resource management challenges in emerging cellular/“noncellular” radio systems The technological evolution of wireless cellular systems has been very rapid in the last two decades. In the coming decade of “converging wireless networks/systems/ecosystems”, there is an increasing demand for achieving very high data rates ubiquitously, even at high mobile speeds, as if we were connected to a wired ADSL! Radio Resource Management (RRM) for the emerging wireless systems will be the key mechanism for achieving such high data rates. Indeed, RRM has already been a hot research area in both academia and industry for decades. Due to the complexity of the emerging wireless systems, an interdisciplinary approach and/or methodology is needed to tackle the new RRM challenges. In this paper, we provide a control engineering view of some of the RRM challenges in emerging wireless networks, with a special emphasis on distributed power control. For example, we establish a link between power control design and dynamic neural networks, two different areas whose scopes of interest, motivations and settings are completely different. Here, we emphasize the importance and the need of an interdisciplinary approach. Subjects addressed within the paper include future-generation cellular/“noncellular” systems, radio resource management challenges, energy efficiency and distributed power control algorithms, variable-structure-systems based power control, channel/frequency allocation, spectral-clustering based channel allocation, and Hopfield neural networks.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
score_0 to score_13: 1.102723, 0.1, 0.1, 0.1, 0.1, 0.034511, 0.001871, 0.000269, 0, 0, 0, 0, 0, 0
BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W. A versatile reconfigurable accelerator architecture for binary/ternary deep neural networks is presented. In-memory neural network processing without any external data accesses, sustained by the symmetry and simplicity of the computation of the binary/ternary neural network, improves the energy efficiency dramatically. The prototype chip is fabricated, and it achieves 1.4 TOPS (tera operations per...
SRAM-Based In-Memory Computing Macro Featuring Voltage-Mode Accumulator and Row-by-Row ADC for Processing Neural Networks This paper presents a mixed-signal SRAM-based in-memory computing (IMC) macro for processing binarized neural networks. The IMC macro consists of 128 × 128 (16K) SRAM-based bitcells. Each bitcell consists of a standard 6T SRAM bitcell, an XNOR-based binary multiplier, and a pseudo-differential voltage-mode driver (i.e., an accumulator unit). Multiply-and-accumulate (MAC) operations between 64 pairs of inputs and weights (stored in the first 64 SRAM bitcells) are performed in 128 rows of the macro, all in parallel. A weight-stationary architecture, which minimizes off-chip memory accesses, effectively reduces energy-hungry data communications. A row-by-row analog-to-digital converter (ADC) based on 32 replica bitcells and a sense amplifier reduces the ADC area overhead and compensates for nonlinearity and variation. The ADC converts the MAC result from each row to an N-bit digital output taking 2^N - 1 cycles per conversion by sweeping the reference level of 32 replica bitcells. The remaining 32 replica bitcells in the row are utilized for offset calibration. In addition, this paper presents a pseudo-differential voltage-mode accumulator to address issues in the current-mode or single-ended voltage-mode accumulator. A test chip including a 16Kbit SRAM IMC bitcell array is fabricated using a 65nm CMOS technology. The measured energy- and area-efficiency is 741-87 TOPS/W with 1-5 bit ADC at a 0.5 V supply and 3.97 TOPS/mm², respectively.
An Always-On 3.8 μJ/86% CIFAR-10 Mixed-Signal Binary CNN Processor With All Memory on Chip in 28-nm CMOS The trend of pushing inference from cloud to edge due to concerns of latency, bandwidth, and privacy has created demand for energy-efficient neural network hardware. This paper presents a mixed-signal binary convolutional neural network (CNN) processor for always-on inference applications that achieves 3.8 μJ/classification at 86% accuracy on the CIFAR-10 image classification data set. The goal of this paper is to establish the minimum-energy point for the representative CIFAR-10 inference task, using the available design tradeoffs. The BinaryNet algorithm for training neural networks with weights and activations constrained to +1 and -1 drastically simplifies multiplications to XNOR and allows integrating all memory on-chip. A weight-stationary, data-parallel architecture with input reuse amortizes memory access across many computations, leaving wide vector summation as the remaining energy bottleneck. This design features an energy-efficient switched-capacitor (SC) neuron that addresses this challenge, employing a 1024-bit thermometer-coded capacitive digital-to-analog converter (CDAC) section for summing pointwise products of CNN filter weights and activations and a 9-bit binary-weighted section for adding the filter bias. The design occupies 6 mm² in 28-nm CMOS, contains 328 kB of on-chip SRAM, operates at 237 frames/s (FPS), and consumes 0.9 mW from 0.6 V/0.8 V supplies. The corresponding energy per classification (3.8 μJ) amounts to a 40× improvement over the previous low-energy benchmark on CIFAR-10, achieved in part by sacrificing some programmability. The SC neuron array is 12.9× more energy efficient than a synthesized digital implementation, which amounts to a 4× advantage in system-level energy per classification.
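The XNOR simplification exploited by these binary-network designs is easy to verify in software: with weights and activations constrained to ±1, a dot product reduces to XNOR plus popcount. A hedged Python sketch, checked against exact arithmetic:

```python
import numpy as np

def xnor_mac(activations, weights):
    """Binary dot product: encode +1/-1 as bits, multiply via XNOR,
    accumulate via popcount: dot = 2 * popcount(XNOR) - n."""
    a = activations > 0
    w = weights > 0
    xnor = ~(a ^ w)                    # True exactly where the signs agree
    return 2 * int(xnor.sum()) - a.size

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=1024)
w = rng.choice([-1, 1], size=1024)
assert xnor_mac(a, w) == int(a @ w)    # matches the exact +/-1 dot product
print(xnor_mac(a, w))
```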
A Neuromorphic Chip Optimized for Deep Learning and CMOS Technology With Time-Domain Analog and Digital Mixed-Signal Processing. Demand for highly energy-efficient coprocessors for the inference computation of deep neural networks is increasing. We propose the time-domain neural network (TDNN), which employs time-domain analog and digital mixed-signal processing (TDAMS) that uses delay time as the analog signal. TDNN not only exploits energy-efficient analog computing, but also enables fully spatially unrolled architecture b...
A 7-nm Compute-in-Memory SRAM Macro Supporting Multi-Bit Input, Weight and Output and Achieving 351 TOPS/W and 372.4 GOPS In this work, we present a compute-in-memory (CIM) macro built around a standard two-port compiler macro using foundry 8T bit-cell in 7-nm FinFET technology. The proposed design supports 1024 4b × 4b multiply-and-accumulate (MAC) computations simultaneously. The 4-bit input is represented by the number of read word-line (RWL) pulses, while the 4-bit weight is realized by charge sharing among binary-weighted computation caps. Each unit of computation cap is formed by the inherent cap of the sense amplifier (SA) inside the 4-bit Flash ADC, which saves area and minimizes kick-back effect. Access time is 5.5 ns with 0.8-V power supply at room temperature. The proposed design achieves energy efficiency of 351 TOPS/W and throughput of 372.4 GOPS. Implications of our design from neural network implementation and accuracy perspectives are also discussed.
CAP-RAM: A Charge-Domain In-Memory Computing 6T-SRAM for Accurate and Precision-Programmable CNN Inference A compact, accurate, and bitwidth-programmable in-memory computing (IMC) static random-access memory (SRAM) macro, named CAP-RAM, is presented for energy-efficient convolutional neural network (CNN) inference. It leverages a novel charge-domain multiply-and-accumulate (MAC) mechanism and circuitry to achieve superior linearity under process variations compared to conventional IMC designs. The adopted semi-parallel architecture efficiently stores filters from multiple CNN layers by sharing eight standard 6T SRAM cells with one charge-domain MAC circuit. Moreover, up to six levels of bit-width of weights with two encoding schemes and eight levels of input activations are supported. A 7-bit charge-injection SAR (ciSAR) analog-to-digital converter (ADC) that eliminates sample-and-hold (S&H) and input/reference buffers further improves the overall energy efficiency and throughput. A 65-nm prototype validates the excellent linearity and computing accuracy of CAP-RAM. A single 512×128 macro stores a complete pruned and quantized CNN model to achieve 98.8% inference accuracy on the MNIST data set and 89.0% on the CIFAR-10 data set, with a peak throughput of 573.4 GOPS and an energy efficiency of 49.4 TOPS/W.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced "cache" memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called "sectors." Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
MAGIC—Memristor-Aided Logic Memristors are passive components with a varying resistance that depends on the previous voltage applied across the device. While memristors are naturally used as memory, memristors can also be used for other applications, including logic circuits. In this brief, a memristor-only logic family, i.e., memristor-aided logic (MAGIC), is presented. In each MAGIC logic gate, memristors serve as an input with previously stored data, and an additional memristor serves as an output. The topology of a MAGIC NOR gate is similar to the structure of a common memristor-based crossbar memory array. A MAGIC NOR gate can therefore be placed within memory, providing opportunities for novel non-von Neumann computer architectures. Other MAGIC gates also exist (e.g., AND, OR, NOT, and NAND) and are described in this brief.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Understanding churn in peer-to-peer networks The dynamics of peer participation, or churn, are an inherent property of Peer-to-Peer (P2P) systems and critical for design and evaluation. Accurately characterizing churn requires precise and unbiased information about the arrival and departure of peers, which is challenging to acquire. Prior studies show that peer participation is highly dynamic but with conflicting characteristics. Therefore, churn remains poorly understood, despite its significance. In this paper, we identify several common pitfalls that lead to measurement error. We carefully address these difficulties and present a detailed study using three widely-deployed P2P systems: an unstructured file-sharing system (Gnutella), a content-distribution system (BitTorrent), and a Distributed Hash Table (Kad). Our analysis reveals several properties of churn: (i) overall dynamics are surprisingly similar across different systems, (ii) session lengths are not exponential, (iii) a large portion of active peers are highly stable while the remaining peers turn over quickly, and (iv) peer session lengths across consecutive appearances are correlated. In summary, this paper advances our understanding of churn by improving accuracy, comparing different P2P file-sharing/distribution systems, and exploring new aspects of churn.
Noise in current-commutating passive FET mixers Noise in the mixer of zero-IF receivers can compromise the overall receiver sensitivity. The evolution of a passive CMOS mixer based on the knowledge of the physical mechanisms of noise in an active mixer is explained. Qualitative physical models that simply explain the frequency translation of both the flicker and white noise of different FETs in the mixer have been developed. Derived equations have been verified by simulations, and mixer optimization has been explained.
1-5.6 Gb/s CMOS clock and data recovery IC with a static phase offset compensated linear phase detector This study presents a 1-5.6 Gb/s CMOS clock and data recovery (CDR) integrated circuit (IC) implemented in a 0.13 μm CMOS process. The CDR uses a half-rate linear phase detector (PD) whose static phase offset is compensated by an additional binary PD and a digital charge pump (CP) calibration block. During initialisation, the static phase offset is detected by the binary PD and the CP current is controlled accordingly to compensate the static phase offset. Also, the architecture of this CDR IC is designed for a clock-embedded serial data interface which transfers CDR training clock patterns before normal random data signals. The implemented IC consumes 16-22 mA from a 1.2 V core supply for data rates of 1-5.6 Gb/s and 20 mA from a 3.3 V I/O supply for two preamplifiers and low-voltage differential signalling drivers. When a 2^31-1 pseudorandom binary sequence is used, the measured bit-error rate is better than 10^-12 and the jitter tolerance is 0.3 UIpp. The recovered clock jitter is 21.6 and 4.2 ps rms for 1 and 5.6 Gb/s data rates, respectively.
Armature Reaction Field and Inductance of Coreless Moving-Coil Tubular Linear Machine Analysis of armature reaction field and inductance is extremely important for design and control implementation of electromagnetic machines. So far, most studies have focused on magnetic field generated by permanent-magnet (PM) poles, whereas less work has been done on armature reaction field. This paper proposes a novel analytical modeling method to predict the armature reaction field of a coreless PM tubular linear machine with dual Halbach array. Unlike conventional modeling approach, the proposed method formulates the armature reaction field for electromagnetic machines with finite length, so that the analytical modeling precision can be improved. In addition, winding inductance is also analytically formulated to facilitate dynamic motion control based on the reaction field solutions. Numerical result is subsequently obtained with finite-element method and employed to validate the derived analytical models. A research prototype with dual Halbach array and single phase input is developed. Experiments are conducted on the reaction field and inductance to further verify the obtained mathematical models.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
score_0 - score_13: 1.053361, 0.05, 0.040861, 0.03, 0.025, 0.016667, 0.003533, 0.000256, 0, 0, 0, 0, 0, 0
On Integrating Radio, Computing, and Application Resource Management in Cognitive Radio Systems Cognitive radio is an emerging concept that facilitates the intelligent usage of radio resources in heterogeneous radio environments. It automates the reconfiguration of software-defined radio (SDR) platforms, which serve as reconfigurable mobile terminals and network elements. This paper introduces a novel approach to resource management in cognitive radio. We call it integrated resource management (IRM) because it integrates the radio resource or spectrum management, the computing resource management of SDR platforms, and the application resource management of SDR applications, which define a platform's radio functionality. Our cognitive radio system thus executes three cognitive cycles: the radio, the computing, and the application cycles. We present a general framework that facilitates the
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
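To make the dominance-frontier construction concrete, here is a minimal Python sketch (an illustration, not the authors' implementation) using the Cooper-Harvey-Kennedy walk, which computes the same dominance-frontier sets as the definition in this paper. The `preds` and `idom` inputs, giving each node's CFG predecessors and immediate dominator, are assumptions for the example.

```python
# Sketch: dominance frontiers via the Cooper-Harvey-Kennedy walk
# (equivalent to the dominance-frontier sets defined in the paper).
# `preds` maps node -> list of CFG predecessors; `idom` maps node ->
# immediate dominator (the entry node dominates itself).
def dominance_frontiers(preds, idom):
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                 # only join points generate DF entries
            for p in ps:
                runner = p
                # walk up the dominator tree until reaching idom(b);
                # every node passed has b in its dominance frontier
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> c, b -> c  (c is a join point)
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "c": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "c": "entry"}
print(dominance_frontiers(preds, idom))  # a and b each have c in their DF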
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
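To illustrate Chord's one operation, the sketch below maps a key onto the first node whose identifier follows it on the ring. The 16-bit identifier width, node names, and centrally known ring are illustrative assumptions; real Chord resolves the same mapping in O(log N) hops via finger tables rather than by scanning a global list.

```python
# Sketch of Chord's core mapping (key -> successor node) on an
# identifier ring. Assumes SHA-1 identifiers truncated to M bits and a
# centrally known ring, purely to illustrate the mapping itself.
import hashlib
from bisect import bisect_right

M = 16  # identifier bits (assumption for the example)

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key_id):
    """First node clockwise from key_id on the ring."""
    i = bisect_right(node_ids, key_id)
    return node_ids[i % len(node_ids)]   # wrap around the ring

nodes = sorted(chord_id(f"node{i}") for i in range(8))
key = chord_id("some-data-item")
print(f"key {key} -> node {successor(nodes, key)}")
```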
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap, including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like the energy-delay-area-squared product (EDA²P) and the energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that, when die cost is not taken into account, clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account, configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
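For reference, the iteration being described can be written down for the generic splitting "minimize f(x) + g(z) subject to Ax + Bz = c", with augmented Lagrangian L_ρ and dual variable y. This is the standard textbook statement with penalty parameter ρ > 0, not a form specific to any one application:

```latex
% Standard ADMM iteration for:  minimize f(x) + g(z)  s.t.  Ax + Bz = c
\begin{align}
x^{k+1} &= \arg\min_{x}\; L_\rho\bigl(x, z^{k}, y^{k}\bigr) \\
z^{k+1} &= \arg\min_{z}\; L_\rho\bigl(x^{k+1}, z, y^{k}\bigr) \\
y^{k+1} &= y^{k} + \rho\,\bigl(A x^{k+1} + B z^{k+1} - c\bigr)
\end{align}
% with the augmented Lagrangian
% L_\rho(x,z,y) = f(x) + g(z) + y^{\top}(Ax + Bz - c)
%                 + (\rho/2)\,\lVert Ax + Bz - c \rVert_2^2
```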
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load-transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. Compared to an equivalent conventional Type III compensator, the area and power consumption of the proposed compensator are reduced by more than 75% in both designs.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but potentially low parallelism. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize hardware utilization, our software design offers a hardware-heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0 - score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Architecture Design of Reconfigurable Pipelined Datapaths This paper examines reconfigurable pipelined datapaths (RaPiDs), a new architectural style for computation-intensive applications that bridges the cost/performance gap between general-purpose and application-specific architectures. RaPiDs can provide significantly higher performance than general-purpose processors on a wide range of applications from the areas of video and signal processing, scientific computing, and communications. Moreover, RaPiDs provide the flexibility that doesn't come with application-specific architectures. A RaPiD architecture is optimized for highly repetitive, computationally-intensive tasks. Very deep application-specific computation pipelines can be configured that deliver very high performance for a wide range of applications. RaPiDs achieve this using a coarse-grained reconfigurable architecture that mixes the appropriate amount of static configuration with dynamic control. We describe the fundamental features of a RaPiD architecture, including the linear array of functional units, a programmable segmented bus structure, and a programmable control architecture. In addition, we outline the floorplan of the architecture and provide timing data for the most critical paths. We conclude with performance numbers for several applications on an instance of a RaPiD architecture.
A detailed power model for field-programmable gate arrays Power has become a critical issue for field-programmable gate array (FPGA) vendors. Understanding the power dissipation within FPGAs is the first step in developing power-efficient architectures and computer-aided design (CAD) tools for FPGAs. This article describes a detailed and flexible power model that has been integrated into the widely used Versatile Place and Route (VPR) CAD tool. This power model estimates the dynamic, short-circuit, and leakage power consumed by FPGAs. It is the first flexible power model developed to evaluate architectural tradeoffs and the efficiency of power-aware CAD tools for a variety of FPGA architectures, and it is freely available for noncommercial use. The model is flexible, in that it can estimate the power for a wide variety of FPGA architectures, and it is fast, in that it does not require extensive simulation, meaning it can be used to explore a large architectural space. We show how the model can be used to investigate the impact of various architectural parameters on the energy consumed by the FPGA, focusing on segment length, switch block topology, lookup-table size, and cluster size.
Flexible Circuits and Architectures for Ultralow Power Subthreshold digital circuits minimize energy per operation and are thus ideal for ultralow-power (ULP) applications with low performance requirements. However, a large range of ULP applications continue to face performance constraints at certain times that exceed the capabilities of subthreshold operation. In this paper, we give two different examples to show that designing flexibility into ULP systems across the architecture and circuit levels can meet both the ULP requirements and the performance demands. Specifically, we first present a method that expands on ultradynamic voltage scaling (UDVS) to combine multiple supply voltages with component-level power switches to provide more efficient operation at any energy-delay point and low-overhead switching between points. This system supports operation across the space from maximum performance, when necessary, to minimum energy, when possible. It thus combines the benefits of single-VDD, multi-VDD, and dynamic voltage scaling (DVS) while improving on them all. Second, we propose that reconfigurable subthreshold circuits can increase applicability for ULP embedded systems. Since ULP devices conventionally require custom circuit design but the manufacturing volume for many ULP applications is low, a subthreshold field-programmable gate array (FPGA) offers a cost-effective custom solution with hardware flexibility that makes it applicable across a wide range of applications. We describe the design of a subthreshold FPGA to support ULP operation and identify key challenges to this effort.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
Tensor-matrix products with a compressed sparse tensor The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multi-way data and is used extensively to analyze very large and extremely sparse datasets. The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matrix products fall into two classes. The first class saves floating-point operations by storing a compressed tensor for each dimension of the data. These methods are fast but suffer high memory costs. The second class uses a single uncompressed tensor at the cost of additional floating-point operations. In this work, we bridge the gap between the two approaches and introduce the compressed sparse fiber (CSF), a data structure for sparse tensors, along with a novel parallel algorithm for tensor-matrix multiplication. CSF offers similar operation reductions as existing compressed methods while using only a single tensor structure. We validate our contributions with experiments comparing against state-of-the-art methods on a diverse set of datasets. Our work uses 58% less memory than the state-of-the-art while achieving 81% of the parallel performance on 16 threads.
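A minimal sketch of the bottleneck kernel itself, shown on plain COO coordinates rather than the paper's CSF structure (all names and sizes are illustrative assumptions): the mode-1 MTTKRP accumulates, for each nonzero X(i,j,k), the elementwise product of the matching factor-matrix rows.

```python
# Sketch (assumption: COO storage, not the paper's CSF) of the mode-1
# MTTKRP kernel  M(i,:) += X(i,j,k) * (B(j,:) * C(k,:))  that dominates
# CPD computation on sparse tensors.
import numpy as np

def mttkrp_mode1(coords, vals, B, C, num_rows):
    M = np.zeros((num_rows, B.shape[1]))
    for (i, j, k), v in zip(coords, vals):
        M[i] += v * (B[j] * C[k])   # Hadamard product of factor rows
    return M

coords = [(0, 1, 0), (1, 0, 1)]      # nonzeros of a tiny 2x2x2 tensor
vals = [2.0, 3.0]
B = np.random.rand(2, 3)             # rank-3 factor matrices
C = np.random.rand(2, 3)
print(mttkrp_mode1(coords, vals, B, C, num_rows=2))
```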
Hidden factors and hidden topics: understanding rating dimensions with review text In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.
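For background, the latent-factor half of such a model is typically the standard biased matrix-factorization predictor below. This is a generic form, not necessarily the paper's exact objective; the approach described above additionally couples the latent item factors to review-topic distributions learned by an LDA-style topic model.

```latex
% Standard latent-factor rating predictor (background form):
\begin{equation}
\hat{r}_{u,i} = \alpha + \beta_{u} + \beta_{i} + \gamma_{u} \cdot \gamma_{i}
\end{equation}
% \alpha: global rating offset; \beta_u, \beta_i: user and item biases;
% \gamma_u, \gamma_i \in \mathbb{R}^{K}: latent user and item factors.
```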
Capstan: A Vector RDA for Sparsity ABSTRACT This paper proposes Capstan: a scalable, parallel-patterns-based, reconfigurable dataflow accelerator (RDA) for sparse and dense tensor applications. Instead of designing for one application, we start with common sparse data formats, each of which supports multiple applications. Using a declarative programming model, Capstan supports application-independent sparse iteration and memory primitives that can be mapped to vectorized, high-performance hardware. We optimize random-access sparse memories with configurable out-of-order execution to increase SRAM random-access throughput from 32% to 80%. For a variety of sparse applications, Capstan with DDR4 memory is 18× faster than a multi-core CPU baseline, while Capstan with HBM2 memory is 16× faster than an Nvidia V100 GPU. For sparse applications that can be mapped to Plasticine, a recent dense RDA, Capstan is 7.6× to 365× faster and only 16% larger.
Scale-out acceleration for machine learning. The growing scale and complexity of Machine Learning (ML) algorithms has resulted in prevalent use of distributed general-purpose systems. In a rather disjoint effort, the community is focusing mostly on high-performance single-node accelerators for learning. This work bridges these two paradigms and offers CoSMIC, a full computing stack (language, compiler, system software, template architecture, and circuit generators) that enables programmable acceleration of learning at scale. CoSMIC enables programmers to exploit scale-out acceleration using FPGAs and Programmable ASICs (P-ASICs) from a high-level and mathematical Domain-Specific Language (DSL). Nonetheless, CoSMIC does not require programmers to delve into the onerous task of system software development or hardware design. CoSMIC achieves three conflicting objectives of efficiency, automation, and programmability, by integrating a novel multi-threaded template accelerator architecture and a cohesive stack that generates the hardware and software code from its high-level DSL. CoSMIC can accelerate a wide range of learning algorithms that are most commonly trained using parallel variants of gradient descent. The key is to distribute partial gradient calculations of the learning algorithms across the accelerator-augmented nodes of the scale-out system. Additionally, CoSMIC leverages the parallelizability of the algorithms to offer multi-threaded acceleration within each node. Multi-threading allows CoSMIC to efficiently exploit the numerous resources that are becoming available on modern FPGAs/P-ASICs by striking a balance between multi-threaded parallelism and single-threaded performance. CoSMIC takes advantage of algorithmic properties of ML to offer a specialized system software that optimizes task allocation, role-assignment, thread management, and internode communication. We evaluate the versatility and efficiency of CoSMIC for 10 different machine learning applications from various domains. On average, a 16-node CoSMIC with UltraScale+ FPGAs offers 18.8× speedup over a 16-node Spark system with Xeon processors while the programmer only writes 22--55 lines of code. CoSMIC offers higher scalability compared to the state-of-the-art Spark; scaling from 4 to 16 nodes with CoSMIC yields 2.7× improvements whereas Spark offers 1.8×. These results confirm that the full-stack approach of CoSMIC takes an effective and vital step towards enabling scale-out acceleration for machine learning.
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
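A minimal sketch of the procedure just described, on an assumed linear least-squares model; the data, batch size, and learning rate are illustrative choices, not values from the text.

```python
# Stochastic gradient descent as described above: repeatedly estimate
# the gradient on a small batch of examples and step downhill.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                    # training inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)      # noisy targets

w = np.zeros(5)                                   # adjustable weights ("knobs")
lr, batch = 0.05, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)     # small set -> noisy gradient
    err = X[idx] @ w - y[idx]
    grad = X[idx].T @ err / batch                 # gradient of mean squared error
    w -= lr * grad                                # move against the gradient
print(w)                                          # approaches true_w
```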
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
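As a concrete illustration of the chain-rule recipe and the ReLU non-linearity described above, here is a hand-written forward and backward pass for a tiny two-layer network; all shapes, initial values, and the squared-error objective are illustrative assumptions.

```python
# Manual backpropagation for a two-layer ReLU network: gradients flow
# from the output backwards, module by module, via the chain rule.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4,))                 # input
t = np.array([1.0])                       # target
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(1, 3))

for _ in range(100):
    # forward pass
    z1 = W1 @ x
    h = np.maximum(z1, 0.0)               # ReLU: f(z) = max(z, 0)
    yhat = W2 @ h
    # backward pass: work from the output toward the input
    d_yhat = 2 * (yhat - t)               # d(squared error)/d(yhat)
    dW2 = np.outer(d_yhat, h)
    d_h = W2.T @ d_yhat                   # chain rule through the top layer
    d_z1 = d_h * (z1 > 0)                 # ReLU passes gradient where z > 0
    dW1 = np.outer(d_z1, x)
    W1 -= 0.01 * dW1
    W2 -= 0.01 * dW2
print(yhat[0])                             # moves toward the target
```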
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
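A minimal sketch of one such convolution-plus-pooling stage in pure NumPy; the hand-picked edge filter is an illustrative assumption, whereas a real ConvNet learns many filter banks per layer.

```python
# One ConvNet stage: a shared filter slides over the image (weight
# sharing), the result passes through a ReLU, and a 2x2 max-pool
# coarse-grains feature positions.
import numpy as np

def conv2d_valid(img, filt):
    H, W = img.shape
    fh, fw = filt.shape
    out = np.zeros((H - fh + 1, W - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+fh, j:j+fw] * filt)  # local weighted sum
    return out

def maxpool2(x):
    H, W = x.shape
    x = x[:H - H % 2, :W - W % 2]          # trim odd edges
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

img = np.random.rand(8, 8)
edge_filter = np.array([[1., -1.], [1., -1.]])  # crude vertical-edge motif
feature_map = np.maximum(conv2d_valid(img, edge_filter), 0)  # ReLU
print(maxpool2(feature_map).shape)               # (3, 3)
```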
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explains half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. 
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. 
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enable generalization to new combinations of the values of learned features beyond those seen during training (for example, 2n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning. 
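Concretely, the "one-of-N vector" input described above makes the first layer a lookup table of word vectors, as the short sketch shows; the vocabulary size and feature dimension are illustrative.

```python
# A one-of-N word input times the first-layer weight matrix is just a
# row lookup: each row of E is that word's learned vector.
import numpy as np

V, d = 10, 4                       # vocabulary size, feature dimension
E = np.random.rand(V, d)           # first-layer weights = word vectors
one_hot = np.zeros(V)
one_hot[3] = 1.0                   # word number 3 in a one-of-N encoding
assert np.allclose(one_hot @ E, E[3])   # selects row 3 of E
```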
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. 
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long [78]. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time [79]. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step [87], enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation [17, 72, 76]. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine, in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to [88], and memory networks, in which a regular network is augmented by a kind of associative memory [89]. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list [88]. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and, after reading a story, they can answer questions that require complex inference [90]. In one test example, the network is shown a 15-sentence version of The Lord of the Rings and correctly answers questions such as “where is Frodo now?” [89]. Unsupervised learning [91, 92, 93, 94, 95, 96, 97, 98] had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.
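A minimal sketch of the gated accumulator described above, assuming the now-standard LSTM gate layout (input, forget, and output gates plus a candidate input; a simplification rather than the exact 1997 formulation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step of a standard LSTM cell. W maps [x; h] to the four gate
    pre-activations; c is the memory cell state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
    g = np.tanh(g)                                 # candidate input
    c = f * c + i * g     # self-connection of weight ~1, gated by f:
                          # the cell accumulates until the gate clears it
    h = o * np.tanh(c)    # exposed hidden state
    return h, c

rng = np.random.default_rng(2)
d_x, d_h = 3, 5
W = rng.normal(scale=0.1, size=(4 * d_h, d_x + d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_x), h, c, W, b)
print(h.shape, c.shape)
```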
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems [99] at classification tasks and produce impressive results in learning to play many different video games [100]. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time [76, 86]. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors [101]. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Recently, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
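To make the notion of an information flow concrete, here is a hypothetical fragment of the kind such a static analysis would reject; the names are illustrative and not from the survey:

```python
SECRET_PIN = 4271          # confidential input (high security level)

def explicit_leak():
    # Explicit flow: secret data is written directly to public output.
    print(SECRET_PIN)

def implicit_leak():
    # Implicit flow: the secret never appears in the output expression,
    # yet the branch condition lets an observer infer one bit of it.
    if SECRET_PIN % 2 == 0:
        print("even")
    else:
        print("odd")

# A security-typed language of the kind surveyed above would reject both
# functions at compile time, enforcing end-to-end confidentiality.
```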
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Decentralized adaptive tracking control for a class of interconnected nonlinear time-varying systems In this paper, aiming at output tracking, a decentralized adaptive backstepping control scheme is proposed for a class of interconnected nonlinear time-varying systems. By introducing a bound estimation approach and two smooth functions, the obstacle caused by unknown time-varying parameters and unknown interactions is circumvented and all signals of the overall closed-loop system are proved to be globally uniformly bounded, without any restriction on the parameters variation speed. Moreover, it is shown that the tracking errors can converge to predefined arbitrarily small residual sets with prescribed convergence rate and maximum overshoot, independent of the parameters variation speed and the strength of interactions. Simulation results performed on double inverted pendulums are presented to illustrate the effectiveness of the proposed scheme.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.073362
0.068932
0.068932
0.066667
0.066667
0.041111
0.022222
0.004274
0.000008
0
0
0
0
0
ForeGraph: Exploring Large-scale Graph Processing on Multi-FPGA Architecture. The performance of large-scale graph processing suffers from challenges including poor locality, lack of scalability, random access pattern, and heavy data conflicts. Some characteristics of FPGA make it a promising solution to accelerate various applications. For example, on-chip block RAMs can provide high throughput for random data access. However, large-scale processing on a single FPGA chip is constrained by limited on-chip memory resources and off-chip bandwidth. Using a multi-FPGA architecture may alleviate these problems to some extent, while the data partitioning and communication schemes should be considered to ensure the locality and reduce data conflicts. In this paper, we propose ForeGraph, a large-scale graph processing framework based on the multi-FPGA architecture. In ForeGraph, each FPGA board only stores a partition of the entire graph in off-chip memory. Communication over partitions is reduced. Vertices and edges are sequentially loaded onto the FPGA chip and processed. Under our scheduling scheme, each FPGA chip performs graph processing in parallel without conflicts. We also analyze the impact of system parameters on the performance of ForeGraph. Our experimental results on Xilinx Virtex UltraScale XCVU190 chip show ForeGraph outperforms state-of-the-art FPGA-based large-scale graph processing systems by 4.54x when executing PageRank on the Twitter graph (1.4 billion edges). The average throughput is over 900 MTEPS in our design and 2.03x larger than previous work.
Algorithm 915, SuiteSparseQR: Multifrontal multithreaded rank-revealing sparse QR factorization SuiteSparseQR is a sparse QR factorization package based on the multifrontal method. Within each frontal matrix, LAPACK and the multithreaded BLAS enable the method to obtain high performance on multicore architectures. Parallelism across different frontal matrices is handled with Intel's Threading Building Blocks library. The symbolic analysis and ordering phase pre-eliminates singletons by permuting the input matrix A into the form [R11 R12; 0 A22] where R11 is upper triangular with diagonal entries above a given tolerance. Next, the fill-reducing ordering, column elimination tree, and frontal matrix structures are found without requiring the formation of the pattern of ATA. Approximate rank-detection is performed within each frontal matrix using Heath's method. While Heath's method is not always exact, it has the advantage of not requiring column pivoting and thus does not interfere with the fill-reducing ordering. For sufficiently large problems, the resulting sparse QR factorization obtains a substantial fraction of the theoretical peak performance of a multicore computer.
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
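A minimal sketch of the winnowing idea from the abstract, assuming the usual parameter roles (hash every k-gram, slide a window of w consecutive hashes, keep the minimum hash per window, rightmost on ties); k and w below are arbitrary:

```python
def winnow(text, k=5, w=4):
    """Return {(hash, position)} fingerprints of `text` by winnowing.
    In each window of w consecutive k-gram hashes, keep the minimum
    hash, breaking ties by taking the rightmost occurrence."""
    # NOTE: Python's str hash is salted per process; a real implementation
    # would use a rolling hash (e.g., Karp-Rabin) for stable fingerprints.
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for start in range(len(hashes) - w + 1):
        window = hashes[start:start + w]
        m = min(window)
        # rightmost position of the minimum within this window
        pos = start + (w - 1 - window[::-1].index(m))
        fingerprints.add((m, pos))
    return fingerprints

# Within one process the hashes are comparable, so shared fingerprints
# between two documents indicate (partial) copying.
a = winnow("the quick brown fox jumps over the lazy dog")
b = winnow("a quick brown fox jumps over a sleepy dog")
print(len(a & b), "shared fingerprints")
```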
Towards on-node Machine Learning for Ultra-low-power Sensors Using Asynchronous ΣΔ Streams We propose a novel architecture to enable low-power, complex on-node data processing, for the next generation of sensors for the internet of things (IoT), smartdust, or edge intelligence. Our architecture combines near-analog-memory-computing (NAM) and asynchronous-computing-with-streams (ACS), eliminating the need for ADCs. ACS enables ultra-low power, massive computational resources required to execute on-node complex Machine Learning (ML) algorithms; while NAM addresses the memory-wall that represents a common bottleneck for ML and other complex functions. In ACS an analog value is mapped to an asynchronous stream that can take one of two logic levels (v_h, v_l). This stream-based data representation enables area/power-efficient computing units such as a multiplier implemented as an AND gate, yielding savings in power of ∼90% compared to digital approaches. The generation of streams for NAM and ACS in a brute force manner, using analog-to-digital-converters (ADCs) and digital-to-streams-converters, would sky-rocket the power-latency-energy cost, making the approach impractical. Our NAM-ACS architecture eliminates expensive conversions, enabling an end-to-end processing on asynchronous streams data-path. We tailor the NAM-ACS architecture for random forest (RaF), an ML algorithm, chosen for its ability to classify using a reduced number of features. Simulations show that our NAM-ACS architecture enables 75% of savings in power compared with a single ADC, obtaining a classification accuracy of 85% using an RaF-inspired algorithm.
CuSha: vertex-centric graph processing on GPUs Vertex-centric graph processing is employed by many popular algorithms (e.g., PageRank) due to its simplicity and efficient use of asynchronous parallelism. The high compute power provided by SIMT architecture presents an opportunity for accelerating these algorithms using GPUs. Prior works of graph processing on a GPU employ Compressed Sparse Row (CSR) form for its space-efficiency; however, CSR suffers from irregular memory accesses and GPU underutilization that limit its performance. In this paper, we present CuSha, a CUDA-based graph processing framework that overcomes the above obstacle via use of two novel graph representations: G-Shards and Concatenated Windows (CW). G-Shards uses a concept recently introduced for non-GPU systems that organizes a graph into autonomous sets of ordered edges called shards. CuSha's mapping of GPU hardware resources on to shards allows fully coalesced memory accesses. CW is a novel representation that enhances the use of shards to achieve higher GPU utilization for processing sparse graphs. Finally, CuSha fully utilizes the GPU power by processing multiple shards in parallel on GPU's streaming multiprocessors. For ease of programming, CuSha allows the user to define the vertex-centric computation and plug it into its framework for parallel processing of large graphs. Our experiments show that CuSha provides significant speedups over the state-of-the-art CSR-based virtual warp-centric method for processing graphs on GPUs.
NVSim-CAM: a circuit-level simulator for emerging nonvolatile memory based content-addressable memory. Ternary Content-Addressable Memory (TCAM) is widely used in networking routers, fully associative caches, search engines, etc. While the conventional SRAM-based TCAM suffers from the poor scalability, the emerging nonvolatile memories (NVM, i.e., MRAM, PCM, and ReRAM) bring evolution for the TCAM design. It effectively reduces the cell size, and makes significant energy reduction and scalability improvement. New applications such as associative processors/accelerators are facilitated by the emergence of the nonvolatile TCAM (nvTCAM). However, nvTCAM design is challenging. In addition to the emerging device's uncertainty, the nvTCAM cell structure is so diverse that it results in a design space too large to explore manually. To tackle these challenges, we propose a circuit-level model and develop a simulation tool, NVSim-CAM, which helps researchers to make early design decisions, and to evaluate device/circuit innovations. The tool is validated by HSPICE simulations and data from fabricated chips. We also present a case study to illustrate how NVSim-CAM benefits the nvTCAM design. In the case study, we propose a novel 3D vertical ReRAM based TCAM cell, the 3DvTCAM. We project the advantages/disadvantages and explore the design space for the proposed cell with NVSim-CAM.
GenAx: A Genome Sequencing Accelerator. Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground-up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7x speedup over the standard BWA-MEM sequence aligner running on a 56-thread dual-socket 14-core Xeon E5 server processor, while reducing power consumption by 12x and area by 5.6x.
Deep Residual Learning for Image Recognition Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
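The reformulation in the abstract, layers learning a residual F(x) that is added back through an identity shortcut so the output is F(x) + x, can be sketched as a generic basic block (PyTorch-style; a simplified illustration, not the exact published architecture):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = F(x) + x, where F is two 3x3 conv layers. The shortcut makes F
    learn a residual instead of an unreferenced mapping."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: the residual connection

block = BasicResidualBlock(16)
x = torch.randn(1, 16, 32, 32)
print(block(x).shape)              # torch.Size([1, 16, 32, 32])
```

Because the shortcut carries the identity, a block can fall back to F(x) ≈ 0 when extra depth does not help, which is one intuition for why such networks remain trainable at great depth.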
Multi-Strategy Coevolving Aging Particle Optimization We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
The software radio concept Since early 1980 an exponential blowup of cellular mobile systems has been observed, which has produced, all over the world, the definition of a plethora of analog and digital standards. In 2000 the industrial competition between Asia, Europe, and America promises a very difficult path toward the definition of a unique standard for future mobile systems, although market analyses underline the trading benefits of a common worldwide standard. It is therefore in this field that the software radio concept is emerging as a potential pragmatic solution: a software implementation of the user terminal able to dynamically adapt to the radio environment in which it is, time by time, located. In fact, the term software radio stands for radio functionalities defined by software, meaning the possibility to define by software the typical functionality of a radio interface, usually implemented in TX and RX equipment by dedicated hardware. The presence of the software defining the radio interface necessarily implies the use of DSPs to replace dedicated hardware, to execute, in real time, the necessary software. In this article objectives, advantages, problem areas, and technological challenges of software radio are addressed. In particular, SW radio transceiver architecture, possible SW implementation, and its download are analyzed
Dynamic sensor collaboration via sequential Monte Carlo We consider the application of sequential Monte Carlo (SMC) methods for Bayesian inference to the problem of information-driven dynamic sensor collaboration in clutter environments for sensor networks. The dynamics of the system under consideration are described by nonlinear sensing models within randomly deployed sensor nodes. The exact solution to this problem is prohibitively complex due to the nonlinear nature of the system. The SMC methods are, therefore, employed to track the probabilistic dynamics of the system and to make the corresponding Bayesian estimates and predictions. To meet the specific requirements inherent in sensor network, such as low-power consumption and collaborative information processing, we propose a novel SMC solution that makes use of the auxiliary particle filter technique for data fusion at densely deployed sensor nodes, and the collapsed kernel representation of the a posteriori distribution for information exchange between sensor nodes. Furthermore, an efficient numerical method is proposed for approximating the entropy-based information utility in sensor selection. It is seen that under the SMC framework, the optimal sensor selection and collaboration can be implemented naturally, and significant improvement is achieved over existing methods in terms of localizing and tracking accuracies.
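A plain bootstrap particle filter conveys the core SMC loop that the abstract builds on (propagate particles through the dynamics, weight by the measurement likelihood, resample); the auxiliary particle filter and kernel-based information exchange in the paper refine this basic loop. The toy model and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000                       # number of particles

def step(x):                   # toy nonlinear dynamics (illustrative)
    return 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 1, size=x.shape)

def likelihood(y, x):          # measurement model: y = x^2 / 20 + noise
    return np.exp(-0.5 * (y - x**2 / 20) ** 2)

particles = rng.normal(0, 2, size=N)
true_x = 0.1
for t in range(20):
    true_x = 0.5 * true_x + 25 * true_x / (1 + true_x**2) + rng.normal()
    y = true_x**2 / 20 + rng.normal()
    particles = step(particles)            # propagate through dynamics
    w = likelihood(y, particles) + 1e-300  # weight by observation
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)       # resample (bootstrap filter)
    particles = particles[idx]
    est = particles.mean()                 # Bayesian state estimate
print("estimate:", est, "truth:", true_x)
```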
Feature selection for medical diagnosis: Evaluation for cardiovascular diseases Machine learning has emerged as an effective medical diagnostic support system. In a medical diagnosis problem, a set of features that are representative of all the variations of the disease are necessary. The objective of our work is to predict more accurately the presence of cardiovascular disease with reduced number of attributes. We investigate intelligent system to generate feature subset with improvement in diagnostic performance. Features ranked with distance measure are searched through forward inclusion, forward selection and backward elimination search techniques to find subset that gives improved classification result. We propose hybrid forward selection technique for cardiovascular disease diagnosis. Our experiment demonstrates that this approach finds smaller subsets and increases the accuracy of diagnosis compared to forward inclusion and back-elimination techniques.
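A sketch of the greedy forward-selection search the abstract describes, scoring candidate subsets by cross-validated accuracy; the dataset and classifier here are stand-ins, not the cardiovascular data or models used in the paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in medical dataset

def forward_selection(X, y, max_features=5):
    """Greedily add the feature that most improves CV accuracy."""
    selected, best_score = [], 0.0
    while len(selected) < max_features:
        scores = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            clf = KNeighborsClassifier(n_neighbors=5)
            scores[j] = cross_val_score(clf, X[:, cols], y, cv=5).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:      # stop when no improvement
            break
        selected.append(j_best)
        best_score = scores[j_best]
    return selected, best_score

feats, acc = forward_selection(X, y)
print("selected features:", feats, "cv accuracy: %.3f" % acc)
```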
Computing the Dynamic Diameter of Non-Deterministic Dynamic Networks is Hard. A dynamic network is a communication network whose communication structure can evolve over time. The dynamic diameter is the counterpart of the classical static diameter, it is the maximum time needed for a node to causally influence any other node in the network. We consider the problem of computing the dynamic diameter of a given dynamic network. If the evolution is known a priori, that is if the network is deterministic, it is known it is quite easy to compute this dynamic diameter. If the evolution is not known a priori, that is if the network is non-deterministic, we show that the problem is hard to solve or approximate. In some cases, this hardness holds also when there is a static connected subgraph for the dynamic network. In this note, we consider an important subfamily of non-deterministic dynamic networks: the time-homogeneous dynamic networks. We prove that it is hard to compute and approximate the value of the dynamic diameter for time-homogeneous dynamic networks.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. Saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
1.11
0.1
0.06
0.05
0.03
0.01
0.005
0.00006
0
0
0
0
0
0
Planning of the DC System Considering Restrictions on the Small-Signal Stability of EV Charging Stations and Comparison Between Series and Parallel Connections Series and parallel connections are proposed for electric vehicle charging stations (EVCSs) to satisfy the demands of large EVs. In this study, simplified linearized models of a DC network integrated with large EVCSs, which adopts series and parallel connections (SC-EVCS and PC-EVCS, respectively), are established. The control subsystem of each EVCS is demonstrated to have weak interaction with the filter subsystem and the DC network. Moreover, a method for designing proper parameters to ensure self-stability of the EVCS is proposed. Additionally, when multiple EVCSs are connected to the DC network, the interaction among the connected EVCSs may decrease the DC system stability. Therefore, the positive-net-damping stability criterion is used to analyze the small-signal stability of the DC system, confirming that the instability of the DC system is caused by two factors, the filter subsystem of the EVCS and the equivalent impedance of the DC network. Considering the restrictions on stability, the maximum number of connections of SC-EVCSs and PC-EVCSs should be limited in planning. Furthermore, the study theoretically proves that compared with PC-EVCS, SC-EVCS can service more EVCSs to the DC network while ensuring stability. Finally, an example using MATLAB is presented to demonstrate the analytical conclusions.
Machine-to-machine communications for home energy management system in smart grid. Machine-to-machine (M2M) communications have emerged as a cutting edge technology for next-generation communications, and are undergoing rapid development and inspiring numerous applications. This article presents an investigation of the application of M2M communications in the smart grid. First, an overview of M2M communications is given. The enabling technologies and open research issues of M2M ...
Reduced-Order Model and Stability Analysis of Low-Voltage DC Microgrid. Depleting fossil fuels, increasing energy demand, and need for high-reliability power supply motivate the use of dc microgrids. This paper analyzes the stability of low-voltage dc microgrid systems. Sources are controlled using a droop-based decentralized controller. Various components of the system have been modeled. A linearized system model is derived using small-signal approximation. The stability of the system is analyzed by identifying the eigenvalues of the system matrix. The sufficiency condition for stable operation of the system is derived. It provides upper bound on droop constants and is useful during planning and designing of dc microgrids. Furthermore, the sensitivity of system poles to variation in cable resistance and inductance is identified. It is proved that the poles move further inside the negative real plane with a decrease in inductance or an increase in resistance. The method proposed in this paper is applicable to any interconnecting structure of sources and loads. The results obtained by analysis are verified by detailed simulation study. Root locus plots are included to confirm the movement of system poles. The viability of the model is confirmed by experimental results from a scaled-down laboratory prototype of a dc microgrid developed for the purpose.
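The stability test described above reduces to a few lines once the linearized model is available: form the system matrix and check that every eigenvalue lies strictly in the left half-plane. The matrix below is an arbitrary placeholder, not the microgrid model from the paper:

```python
import numpy as np

def is_small_signal_stable(A):
    """A linearized system dx/dt = A x is asymptotically stable iff all
    eigenvalues of A have strictly negative real parts."""
    eigs = np.linalg.eigvals(A)
    return bool(np.all(eigs.real < 0)), eigs

# Placeholder system matrix; in the paper this would come from the
# linearized droop-controlled microgrid model.
A = np.array([[-2.0,  1.0],
              [ 0.5, -3.0]])
stable, eigs = is_small_signal_stable(A)
print(stable, eigs)
```

Sensitivity studies like those in the paper then amount to recomputing the eigenvalues while sweeping a parameter (for example, cable resistance or inductance) and watching how the poles move.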
High-Fidelity Model Order Reduction for Microgrids Stability Assessment. Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has been recently realized that the stability conditions for such microgrids are significantly different from those known for large-scale power systems. In particular, the network dynamics, despite its fast nature, appears to have major influence on stability of slower modes. While detailed ...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
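The dominance-frontier concept introduced in the abstract admits a compact computation; the sketch below uses the later, widely taught walk from each join point's predecessors up to its immediate dominator (assuming immediate dominators are already computed), not the paper's original algorithm:

```python
def dominance_frontiers(preds, idom):
    """preds: node -> list of predecessors; idom: node -> immediate
    dominator (with idom[entry] == entry). Returns node -> frontier set."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) < 2:            # only join points contribute
            continue
        for p in ps:
            runner = p
            # walk up the dominator tree until reaching n's idom
            while runner != idom[n]:
                df[runner].add(n)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))
# 'merge' appears in the frontiers of 'a' and 'b': that is where SSA
# construction would place phi-functions.
```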
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
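The construction is short enough to sketch directly: the secret becomes the constant term of a random polynomial of degree k - 1 over a prime field, shares are point evaluations, and any k shares recover the secret by Lagrange interpolation at zero. The small prime below is for readability; a real deployment would use a suitably large one:

```python
import random

P = 2**31 - 1          # toy field prime (use a large prime in practice)

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))    # any 3 of the 5 shares suffice
print(reconstruct(shares[1:4]))   # a different 3 give the same secret
```

With only k - 1 shares the interpolation is underdetermined: every candidate secret remains equally consistent, which is exactly the information-theoretic guarantee the abstract states.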
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Cache attacks and countermeasures: the case of AES We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
A normal form for XML documents This paper takes a first step towards the design and normalization theory for XML documents. We show that, like relational databases, XML documents may contain redundant information, and may be prone to update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Our goal is to find a way of converting an arbitrary DTD into a well-designed one, that avoids these problems. We first introduce the concept of a functional dependency for XML, and define its semantics via a relational representation of XML. We then define an XML normal form, XNF, that avoids update anomalies and redundancies. We study its properties and show that it generalizes BCNF and a normal form for nested relations when those are appropriately coded as XML documents. Finally, we present a lossless algorithm for converting any DTD into one in XNF.
Synchronization via Pinning Control on General Complex Networks. This paper studies synchronization via pinning control on general complex dynamical networks, such as strongly connected networks, networks with a directed spanning tree, weakly connected networks, and directed forests. A criterion for ensuring network synchronization on strongly connected networks is given. It is found that the vertices with very small in-degrees should be pinned first. In addition, it is shown that the original condition with controllers can be reformulated such that it does not depend on the form of the chosen controllers, which implies that the vertices with very large out-degrees may be pinned. Then, a criterion for achieving synchronization on networks with a directed spanning tree, which can be composed of many strongly connected components, is derived. It is found that the strongly connected components with very few connections from other components should be controlled and the components with many connections from other components can achieve synchronization even without controls. Moreover, a simple but effective pinning algorithm for reaching synchronization on a general complex dynamical network is proposed. Finally, some simulation examples are given to verify the proposed pinning scheme.
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
0
A 5-MHz 91% peak-power-efficiency buck regulator with auto-selectable peak- and valley-current control This paper presents a multi-MHz buck regulator for portable applications using an auto-selectable peak- and valley-current control (ASPVCC) scheme. The proposed ASPVCC scheme and the dynamically-biased shunt feedback in the current sensors relax the settling-time requirement of the current sensing and improve the sensing speed. The proposed converter can thus operate at high switching frequencies with a wide range of duty ratios for reducing the required inductance. Implemented in a 0.35-μm CMOS process, the proposed buck converter can operate at 5-MHz with a duty-ratio range of 0.6, use a small-value off-chip inductor of 1 μH, and achieve 91% peak power efficiency.
A 1.2A buck-boost LED driver with 13% efficiency improvement using error-averaged SenseFET-based current sensing.
Digitally Controlled Current-Mode DC–DC Converter IC The main focus of this paper is the implementation of mixed-signal peak current mode control in low-power dc-dc converters for portable applications. A DAC is used to link the digital voltage loop compensator to the analog peak current mode loop. Conventional DAC architectures, such as flash or ΔΣ are not suitable due to excessive power consumption and limited bandwidth of the reconstruction filter, respectively. The charge-pump based DAC (CP-DAC) used in this work has relatively poor linearity compared to more expensive DAC topologies; however, this can be tolerated since the linearity has a minor effect on the converter dynamics as long as the limit-cycle conditions are met. The CP-DAC has a guaranteed monotonic behavior from the digital current command to the peak inductor current, which is essential for maintaining stability. A buck converter IC, which was fabricated in a 0.18 μm CMOS process with 5 V compatible transistors, achieves a response time of 4 μs at fs = 3 MHz and Vout = 1 V, for a 200 mA load-step. The active area of the controller is only 0.077 mm2, and the total controller current-draw, which is heavily dominated by the on-chip senseFET current-sensor, is below 250 μA for a load current of Iout = 50 mA.
An Integrated Speed- and Accuracy-Enhanced CMOS Current Sensor With Dynamically Biased Shunt Feedback for Current-Mode Buck Regulators This paper presents a new compact on-chip current-sensing circuit to enable current-mode buck regulators operating at a high switching frequency for reducing the inductor profile. A dynamically biased shunt feedback technique is developed in the proposed current sensor to push nondominant poles to higher frequencies, thereby improving the speed and stability of the current sensor under a wide range of load currents. A feedforward gain stage in the proposed current sensor also increases the dc loop-gain magnitude and thus enhances the accuracy of the current sensing. A current-mode buck regulator with the proposed current sensor has been implemented in a standard 0.35-μm CMOS process. Measurement results show that the proposed current sensor can achieve 95% sensing accuracy and < 50-ns settling time. The buck converter can thus operate properly at the switching frequency of 2.5 MHz with the duty cycle down to 0.3. The output ripple voltage of the regulator is < 43 mV with a 4.7-μF off-chip capacitor and a 2.2-μH off-chip inductor. The power efficiency of the buck regulator achieves above 80% over the load current ranging from 25 to 500 mA.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A Delay-Locked Loop Synchronization Scheme for High-Frequency Multiphase Hysteretic DC-DC Converters This paper reports a delay-locked loop (DLL) based hysteretic controller for high-frequency multiphase dc-dc buck converters. The DLL control loop employs the switching frequency of a hysteretic comparator as reference to automatically synchronize the remaining phases and eliminate the need for external synchronization. A dedicated duty cycle control loop is used to enable current sharing and ripple cancellation. We demonstrate a four-phase high-frequency buck converter that operates at 25-70 MHz with fast hysteretic control and output conversion range of 17.5%-80%. The converter achieves an efficiency of 83% at 2 W and 80% at 3.3 W. The circuit has been implemented in standard 0.5 μm 5 V CMOS process.
A 10/30 MHz Fast Reference-Tracking Buck Converter With DDA-Based Type-III Compensator A 10/30 MHz voltage-mode controlled buck converter with a wide duty-cycle range is presented. A high-accuracy delay-compensated ramp generator using only low-speed comparators but can work up to 70 MHz is proposed. By using a differential difference amplifier (DDA), a new Type-III compensator is proposed to reduce the chip area of the compensator by 60%. Moreover, based on the unique structure of the proposed compensator, an end-point prediction (EPP) scheme is also implemented to achieve fast reference-tracking responses. The converter was fabricated in a 0.13 μm standard CMOS process. It achieves wide duty-cycle ranges of 0.75 and 0.59 when switching at 10 MHz and 30 MHz with peak efficiencies of 91.8% and 86.6%, respectively. The measured maximum output power is 3.6 W with 2.4 V output voltage and 1.5 A load current. With a constant load current of 500 mA, the up-tracking speeds for switching frequencies of 10 MHz and 30 MHz are 1.67 μs/V and 0.67 μs/V, respectively. The down-tracking speeds for 10 MHz and 30 MHz are 4.44 μs/V and 1.56 μs/V, respectively.
A Double-Tail Latch-Type Voltage Sense Amplifier with 18ps Setup+Hold Time.
Bandwidth extension in CMOS with optimized on-chip inductors We present a technique for enhancing the bandwidth of gigahertz broad-band circuitry by using optimized on-chip spiral inductors as shunt-peaking elements. The series resistance of the on-chip inductor is incorporated as part of the load resistance to permit a large inductance to be realized with minimum area and capacitance. Simple, accurate inductance expressions are used in a lumped circuit inductor model to allow the passive and active components in the circuit to be simultaneously optimized. A quick and efficient global optimization method, based on geometric programming, is discussed. The bandwidth extension technique is applied in the implementation of a 2.125-Gbaud preamplifier that employs a common-gate input stage followed by a cascoded common-source stage. On-chip shunt peaking is introduced at the dominant pole to improve the overall system performance, including a 40% increase in the transimpedance. This implementation achieves a 1.6-kΩ transimpedance and a 0.6-μA input-referred current noise, while operating with a photodiode capacitance of 0.6 pF. A fully differential topology ensures good substrate and supply noise immunity. The amplifier, implemented in a triple-metal, single-poly, 14-GHz fT, 0.5-μm CMOS process, dissipates 225 mW, of which 110 mW is consumed by the 50-Ω output driver stage. The optimized on-chip inductors consume only 15% of the total area of 0.6 mm². This paper discusses how optimized on-chip inductors can be used to enhance the bandwidth of broad-band amplifiers and thereby push the performance limits of CMOS implementations. An attractive feature of this technique is that the bandwidth enhancement comes with no additional power dissipation. This bandwidth enhancement is achieved by shunt peaking, a method first used in the 1940s to extend the bandwidth of television tubes. Section II describes the fundamentals of this approach. Section III focuses on how shunt-peaked amplifiers can be implemented in the integrated circuit environment. A well-accepted lumped circuit model for a spiral inductor is used along with recently developed inductance expressions to allow the inductor modeling to be performed in a standard circuit design environment such as SPICE. This approach circumvents the inconvenient, iterative interface between an inductor simulator and a circuit design tool. Most important, a new design methodology is described that yields a large inductance in a small die area. The new method is implemented using a simple and efficient circuit design computer-aided design tool described in Section IV. This tool is based on geometric programming (GP), a special type of optimization problem for which very efficient global optimization methods have been developed. An attractive feature of this technique is that it enables the designer to optimize passive and active devices simultaneously. This feature allows a shunt-peaked amplifier with on-chip inductors to be optimized directly from specifications. Sections V and VI illustrate how shunt peaking is used to improve the performance of a transimpedance preamplifier. A prototype preamplifier, intended for gigabit optical communication systems, is implemented in a 0.5-μm CMOS process. The use of on-chip shunt peaking permits a 40% increase in the transimpedance with no additional power dissipation. The optimized on-chip inductors only consume 15% of the total chip area.
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
A simple graph theoretic characterization of reachability for positive linear systems In this paper we consider discrete-time linear positive systems, that is systems defined by a pair (A,B) of non-negative matrices. We study the reachability of such systems which in this case amounts to the freedom of steering the state in the positive orthant by using non-negative control sequences. This problem was solved recently [Canonical forms for positive discrete-time linear control systems, Linear Algebra Appl., 310 (2000) 49]. However we derive here necessary and sufficient conditions for reachability in a simpler and more compact form. These conditions are expressed in terms of particular paths in the graph which is naturally associated with the system.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.021335
0.018462
0.015385
0.014044
0.007692
0.004141
0.000228
0.000015
0
0
0
0
0
0
An Efficient Flexible Common Operator for FFT and Viterbi Algorithms Today's telecommunication systems require more and more flexibility, and reconfiguration mechanisms are becoming major topics especially when it comes to multi-standard designs. This paper capitalizes on the Common Operator technique to present a new common operator for the FFT and Viterbi algorithms. In the present work, the FFT/Viterbi common butterfly is investigated, where reuse and power consumption are traded against throughput. Performance comparisons with similar works are discussed in this paper.
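As an illustration of why a shared FFT/Viterbi operator is plausible (a structural sketch, not the paper's actual operator), both kernels pass a pair of inputs through one short arithmetic stage: the FFT butterfly is a multiply followed by an add and a subtract, and the Viterbi recursion is an add-compare-select:

```python
import cmath

def fft_butterfly(a, b, w):
    """Radix-2 DIT FFT butterfly: one complex multiply, an add, a subtract."""
    t = b * w
    return a + t, a - t

def viterbi_acs(m0, m1, bm0, bm1):
    """Viterbi Add-Compare-Select 'butterfly': two adds plus a comparison.
    Returns the survivor path metric and the decision bit."""
    p0, p1 = m0 + bm0, m1 + bm1
    return (p0, 0) if p0 <= p1 else (p1, 1)

print(fft_butterfly(1 + 0j, 0 + 1j, cmath.exp(-2j * cmath.pi / 8)))
print(viterbi_acs(3.0, 2.5, 1.0, 2.0))
```

It is this shared two-input, one-stage dataflow shape that a common hardware butterfly can exploit, trading some throughput for reuse as the abstract notes.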
A common operator for FFT and FEC decoding In the Software Radio context, the parametrization is becoming an important topic especially when it comes to multi-standard designs. This paper capitalizes on the common operator technique to present new common structures for the FFT and FEC decoding algorithms. A key benefit of exhibiting common operators is the regular architecture it brings when implemented in a Common Operator Bank (COB). This regularity makes the architecture open to future function mapping and adapted to accommodated silicon technology variability through dependable design.
Promising Technique of Parameterization For Reconfigurable Radio, the Common Operators Technique: Fundamentals and Examples In the field of Software Radio (SWR), parameterization studies have become a very important topic, mainly because parameterization will probably decrease the size of the software to be downloaded and limit the reconfiguration time. In this paper, parameterization is considered as a digital radio design methodology, and two different techniques, namely common functions and common operators, are compared. The second view is developed and illustrated by two examples: the well-known Fast Fourier Transform (FFT) and the proposed Reconfigurable Linear Feedback Shift Register (R-LFSR), derived from the classical Linear Feedback Shift Register (LFSR) structure.
The CORDIC Trigonometric Computing Technique The COordinate Rotation DIgital Computer (CORDIC) is a special-purpose digital computer for real-time airborne computation. In this computer, a unique computing technique is employed which is especially suitable for solving the trigonometric relationships involved in plane coordinate rotation and conversion from rectangular to polar coordinates. CORDIC is an entire-transfer computer; it contains a special serial arithmetic unit consisting of three shift registers, three adder-subtractors, and special interconnections. By use of a prescribed sequence of conditional additions or subtractions, the CORDIC arithmetic unit can be controlled to solve either set of the following equations: Y' = K(Y cos θ + X sin θ), X' = K(X cos θ − Y sin θ); or R = K√(X² + Y²), θ = tan⁻¹(Y/X), where K is an invariable constant. This special arithmetic unit is also suitable for other computations such as multiplication, division, and the conversion between binary and mixed radix number systems. However, only the trigonometric algorithms used in this computer and the instrumentation of these algorithms are discussed in this paper.
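A minimal floating-point sketch of the rotation mode described above (real CORDIC hardware uses fixed-point shift-and-add; the iteration count and naming here are illustrative):

    import math

    def cordic_sin_cos(theta, iterations=32):
        # Micro-rotation angles atan(2^-i) and the aggregate gain K.
        angles = [math.atan(2.0 ** -i) for i in range(iterations)]
        k = 1.0
        for i in range(iterations):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = k, 0.0, theta  # start on the x-axis, pre-scaled by 1/K
        for i in range(iterations):
            d = 1.0 if z >= 0 else -1.0           # steer residual angle z to 0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return y, x  # (sin(theta), cos(theta)) for |theta| up to ~99.9 degrees

    print(cordic_sin_cos(0.5))  # ~(0.4794, 0.8776)

Each iteration only shifts, adds, and subtracts, which is exactly why the scheme suits a serial shift-register arithmetic unit.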
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
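A compact sketch of the per-node dominance-frontier computation, done bottom-up over the dominator tree and assuming the immediate-dominator map is already available (the graph encoding is illustrative):

    def dominance_frontiers(succ, idom):
        # succ: node -> list of CFG successors; idom: node -> immediate dominator.
        children = {n: [] for n in succ}            # dominator-tree children
        for n, d in idom.items():
            if d is not None:
                children[d].append(n)
        df = {n: set() for n in succ}

        def walk(n):                                # post-order over the dom tree
            for c in children[n]:
                walk(c)
            for m in succ[n]:                       # DF_local
                if idom[m] != n:
                    df[n].add(m)
            for c in children[n]:                   # DF_up
                for m in df[c]:
                    if idom[m] != n:
                        df[n].add(m)

        root = next(n for n, d in idom.items() if d is None)
        walk(root)
        return df

    # Diamond CFG: entry -> a, b; a, b -> exit.
    succ = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}
    idom = {"entry": None, "a": "entry", "b": "entry", "exit": "entry"}
    print(dominance_frontiers(succ, idom))  # DF(a) = DF(b) = {'exit'}

Phi-functions for a variable are then placed at the iterated dominance frontier of its definition sites, which is the SSA construction step the frontiers enable.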
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
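A small sketch of the (k, n) threshold scheme the paper describes, over a prime field (the specific prime and interface are illustrative; a real deployment would pick a field larger than any secret):

    import random

    P = 2_147_483_647  # prime modulus; must exceed the secret and n

    def split(secret, n, k):
        # Random degree-(k-1) polynomial with constant term = secret.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        value = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, value(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, n=5, k=3)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789

With only k − 1 shares, the polynomial's value at x = 0 is still uniformly random, which is where the "absolutely no information" guarantee comes from.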
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
ImageNet Classification with Deep Convolutional Neural Networks. We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
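As a side note on the regularization mentioned above, here is a numpy sketch of dropout in its inverted form (the formulation common today; the original paper instead rescaled weights at test time, and the keep probability is illustrative):

    import numpy as np

    def dropout(x, p_keep=0.5, training=True, rng=np.random.default_rng(0)):
        if not training:
            return x  # identity at test time under inverted scaling
        mask = rng.random(x.shape) < p_keep   # keep each unit with prob p_keep
        return x * mask / p_keep              # rescale to preserve the mean

    print(dropout(np.ones((2, 4))))  # about half the units zeroed, rest scaled by 2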
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods.Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
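For reference, a sketch of the plug-in (maximum-likelihood) estimator and the Miller-Madow correction the abstract alludes to, where the bias term (m − 1)/(2N) uses the number m of observed symbols:

    import numpy as np

    def entropy_mle(samples):
        _, counts = np.unique(samples, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))          # in nats

    def entropy_miller_madow(samples):
        _, counts = np.unique(samples, return_counts=True)
        n, m = counts.sum(), len(counts)       # sample size, observed support
        return entropy_mle(samples) + (m - 1) / (2 * n)

    x = np.random.default_rng(0).integers(0, 8, size=200)  # uniform over 8 symbols
    print(entropy_mle(x), entropy_miller_madow(x), np.log(8))

The paper's point is precisely that such corrections fail outside a favorable sampling regime, so this sketch should be read as the baseline being critiqued.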
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2^32 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher level of resource integration and the addition of new features in modern multi-processors put a significant pressure on their verification. Although a large amount of resources and time are devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower quality products, lower customer satisfaction, diminishing brand/company reputation, or even expensive product recalls.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power-efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.1
0.1
0.05
0.033333
0
0
0
0
0
0
0
0
0
0
Arbitrary Analog/RF Spatial Filtering for Digital MIMO Receiver Arrays. Traditional digital multiple-input multiple-output (MIMO) receivers that feature element-level digitization face high instantaneous dynamic range challenges in the analog/RF domain due to the absence of analog/RF spatial filtering. Existing analog/RF spatial notch filtering techniques are limited in their noise, linearity, and spatial filtering bandwidth performance. More importantly, only single ...
A 9–31-GHz Subharmonic Passive Mixer in 90-nm CMOS Technology A subharmonic down-conversion passive mixer is designed and fabricated in a 90-nm CMOS technology. It utilizes a single active device and operates in the LO source-pumped mode, i.e., the LO signal is applied to the source and the RF signal to the gate. When driven by an LO signal whose frequency is only half that of a fundamental mixer, the mixer exhibits a conversion loss as low as 8-11 dB over a wide RF frequency range of 9-31 GHz. This performance is superior to the mixer operating in the gate-pumped mode, where the mixer shows a conversion loss of 12-15 dB over an RF frequency range of 6.5-20 GHz. Moreover, this mixer can also operate with an LO signal whose frequency is only 1/3 of the fundamental one, and achieves a conversion loss of 12-15 dB within an RF frequency range of 12-33 GHz. The IF signal is always extracted from the drain via a low-pass filter which supports an IF frequency range from DC to 2 GHz. These results, for the first time, demonstrate the feasibility of implementing high-frequency wideband subharmonic passive mixers in a low-cost CMOS technology.
120-GHz Wideband I/Q Receiver Based on Baseband Equalizing Technique In this study, we examined a 120-GHz wideband I/Q receiver based on a baseband equalizing amplifier using 40-nm complementary metal oxide semiconductor (CMOS) technology. For low-power operation, the receiver chipset is integrated based on the direct conversion structure. To achieve high data-rate wireless communication, the receiver utilizes a frequency equalizing technique between the low noise ...
A High-Fractional-Bandwidth, Millimeter-Wave Bidirectional Image-Selection Architecture With Narrowband LO Tuning Requirements. An image-selection, two-element transceiver is presented that operates over a large fractional bandwidth (FBW) covering both 71-76 and 81-86 GHz while requiring only 3 GHz of local oscillator (LO) tuning range at the RF mixer. A bidirectional sliding-intermediate frequency (IF) Weaver architecture allows operation in either transmit (TX) or receive (RX) modes. The sliding IF and narrow LO tuning r...
Wide-Band CMOS Low-Noise Amplifier Exploiting Thermal Noise Canceling Known elementary wide-band amplifiers suffer from a fundamental tradeoff between noise figure (NF) and source impedance matching, which limits the NF to values typically above 3 dB. Global negative feedback can be used to break this tradeoff, however, at the price of potential instability. In contrast, this paper presents a feedforward noise-canceling technique, which allows for simultaneous noise...
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Why systolic architectures?
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints y_k = ⟨f#, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f#‖_ℓ2 ≤ C_p · R · (K/log N)^(−r), r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble, which are detailed.
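A toy numerical check of this recovery procedure, with basis pursuit recast as a linear program via the usual x = u − v splitting (problem sizes are illustrative and scipy's default LP solver is assumed):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, K, S = 128, 40, 5                     # ambient dim, measurements, sparsity
    f = np.zeros(N)
    f[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
    X = rng.standard_normal((K, N))          # Gaussian measurement ensemble
    y = X @ f                                # y_k = <f, X_k>

    # min ||x||_1  s.t. Xx = y   ==   min 1'(u+v)  s.t. X(u-v) = y, u, v >= 0
    res = linprog(np.ones(2 * N), A_eq=np.hstack([X, -X]), b_eq=y,
                  bounds=[(0, None)] * (2 * N))
    f_hat = res.x[:N] - res.x[N:]
    print(np.max(np.abs(f_hat - f)))         # tiny: exact recovery from K << N

With 40 random projections of a 5-sparse vector in dimension 128, the ℓ1 program typically recovers f exactly, matching the theorem's message.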
Efficient Cache Attacks on AES, and Countermeasures We describe several software side-channel attacks based on inter-process leakage through the state of the CPU's memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux's dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.
A normal form for XML documents This paper takes a first step towards the design and normalization theory for XML documents. We show that, like relational databases, XML documents may contain redundant information, and may be prone to update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Our goal is to find a way of converting an arbitrary DTD into a well-designed one, that avoids these problems. We first introduce the concept of a functional dependency for XML, and define its semantics via a relational representation of XML. We then define an XML normal form, XNF, that avoids update anomalies and redundancies. We study its properties and show that it generalizes BCNF and a normal form for nested relations when those are appropriately coded as XML documents. Finally, we present a lossless algorithm for converting any DTD into one in XNF.
Synchronization via Pinning Control on General Complex Networks. This paper studies synchronization via pinning control on general complex dynamical networks, such as strongly connected networks, networks with a directed spanning tree, weakly connected networks, and directed forests. A criterion for ensuring network synchronization on strongly connected networks is given. It is found that the vertices with very small in-degrees should be pinned first. In addition, it is shown that the original condition with controllers can be reformulated such that it does not depend on the form of the chosen controllers, which implies that the vertices with very large out-degrees may be pinned. Then, a criterion for achieving synchronization on networks with a directed spanning tree, which can be composed of many strongly connected components, is derived. It is found that the strongly connected components with very few connections from other components should be controlled and the components with many connections from other components can achieve synchronization even without controls. Moreover, a simple but effective pinning algorithm for reaching synchronization on a general complex dynamical network is proposed. Finally, some simulation examples are given to verify the proposed pinning scheme.
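A toy numerical illustration of pinning (everything here is invented for the sketch: scalar unstable node dynamics, an undirected ring graph, and a single pinned vertex driving the whole network to the reference state 0):

    import numpy as np

    n, c, d, dt, T = 6, 2.0, 5.0, 0.01, 4000
    A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)  # ring
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    pin = np.zeros(n); pin[0] = d            # feedback gain only on vertex 0

    x = np.random.default_rng(1).uniform(-3, 3, n)
    for _ in range(T):
        # x_i' = 0.5*x_i (unstable alone) - c*[Lx]_i - pin_i*(x_i - 0)
        x = x + dt * (0.5 * x - c * (L @ x) - pin * x)

    print(np.max(np.abs(x)))  # near zero: one pinned vertex stabilizes the ring

The diffusive coupling propagates the single controller's influence through the graph, the mechanism the paper's pinning criteria make precise for general directed topologies.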
A Delay-Locked Loop Synchronization Scheme for High-Frequency Multiphase Hysteretic DC-DC Converters This paper reports a delay-locked loop (DLL) based hysteretic controller for high-frequency multiphase dc-dc buck converters. The DLL control loop employs the switching frequency of a hysteretic comparator as reference to automatically synchronize the remaining phases and eliminate the need for external synchronization. A dedicated duty cycle control loop is used to enable current sharing and ripple cancellation. We demonstrate a four-phase high-frequency buck converter that operates at 25-70 MHz with fast hysteretic control and output conversion range of 17.5%-80%. The converter achieves an efficiency of 83% at 2 W and 80% at 3.3 W. The circuit has been implemented in a standard 0.5 µm 5 V CMOS process.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bidirectional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 µs while small neural signals can be continuously monitored.
1.2
0.2
0.2
0.1
0.004082
0
0
0
0
0
0
0
0
0
On QUAD, Lipschitz, and Contracting Vector Fields for Consensus and Synchronization of Networks. In this paper, a relationship is discussed between three common assumptions made in the literature to prove local or global asymptotic stability of the synchronization manifold in networks of coupled nonlinear dynamical systems. In such networks, each node, when uncoupled, is described by a nonlinear ordinary differential equation of the form ẋ = f(x, t). In this paper, we establish links between...
Perception-Based Data Reduction and Transmission of Haptic Data in Telepresence and Teleaction Systems We present a novel approach for the transmission of haptic data in telepresence and teleaction systems. The goal of this work is to reduce the packet rate between an operator and a teleoperator without impairing the immersiveness of the system. Our approach exploits the properties of human haptic perception and is, more specifically, based on the concept of just noticeable differences. In our scheme, updates of the haptic amplitude values are signaled across the network only if the change of a haptic stimulus is detectable by the human operator. We investigate haptic data communication for a 1 degree-of-freedom (DoF) and a 3 DoF teleaction system. Our experimental results show that the presented approach is able to reduce the packet rate between the operator and teleoperator by up to 90% of the original rate without affecting the performance of the system.
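The core mechanism reduces to a Weber-fraction deadband test, sketched below (the threshold k and the sample stream are illustrative):

    import math

    def deadband_stream(samples, k=0.1):
        # Send a sample only if it deviates from the last transmitted value
        # by more than the just-noticeable difference k * |last value|.
        sent, last = [], None
        for i, s in enumerate(samples):
            if last is None or abs(s - last) > k * abs(last):
                sent.append((i, s))
                last = s
        return sent

    samples = [math.sin(t / 20) for t in range(200)]
    packets = deadband_stream(samples, k=0.1)
    print(f"{len(packets)}/{len(samples)} packets sent")  # large reduction

The receiver simply holds the last received value, so changes below the perceptual threshold cost no packets at all.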
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time-varying delay. The dead band is considered as a linear term and a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov-Krasovskii functionals, imperfect time-delay estimation is studied, and a condition for the effect of dead-zone uncertainties on stability is derived. The effect of different LMI formulations on conservativeness is also studied. The control law is tested in practice.
Consensus in switching networks with sectorial nonlinear couplings: Absolute stability approach Consensus algorithms for multi-agent networks with high-order agent dynamics, time-varying topology, and uncertain symmetric nonlinear couplings are considered. Convergence conditions for these algorithms are obtained by means of the Kalman-Yakubovich-Popov lemma and absolute stability techniques. The conditions are similar in spirit and extend the celebrated circle criterion for the stability of Lurie systems.
Observer-Based Event-Triggered Adaptive Fuzzy Control for Leader-Following Consensus of Nonlinear Strict-Feedback Systems In this article, the leader-following consensus problem via the event-triggered control technique is studied for the nonlinear strict-feedback systems with unmeasurable states. The follower's nonlinear dynamics is approximated using the fuzzy-logic systems, and the fuzzy weights are updated in a nonperiodic manner. By introducing a fuzzy state observer to reconstruct the system states, an observer-based event-triggered adaptive fuzzy control and a novel event-triggered condition are designed, simultaneously. In addition, the nonzero positive lower bound on interevent intervals is presented to avoid the Zeno behavior. It is proved via an extension of the Lyapunov approach that ultimately bounded control is achieved for the leader-following consensus of the considered multiagent systems. One remarkable advantage of the proposed control protocol is that the control law and fuzzy weights are updated only when the event-triggered condition is violated, which can greatly decrease the data transmission and communication resource. The simulation results are provided to show the effectiveness of the proposed control strategy and the theoretical analysis.
Designing Fully Distributed Consensus Protocols for Linear Multi-Agent Systems With Directed Graphs This technical note addresses the distributed consensus protocol design problem for multi-agent systems with general linear dynamics and directed communication graphs. Existing works usually design consensus protocols using the smallest real part of the nonzero eigenvalues of the Laplacian matrix associated with the communication graph, which however is global information. In this technical note, based on only the agent dynamics and the relative states of neighboring agents, a distributed adaptive consensus protocol is designed to achieve leader-follower consensus in the presence of a leader with a zero input for any communication graph containing a directed spanning tree with the leader as the root node. The proposed adaptive protocol is independent of any global information of the communication graph and thereby is fully distributed. Extensions to the case with multiple leaders are further studied.
Analysis and Pinning Control for Output Synchronization and H∞ Output Synchronization of Multiweighted Complex Networks The output synchronization and H∞ output synchronization problems for multiweighted complex networks are discussed in this paper. First, we analyze the output synchronization of multiweighted complex networks by exploiting Lyapunov functionals and Barbalat's lemma. In addition, some nodes- and edges-based pinning control strategies are developed to ensure the output synchronization of multiweighted complex networks. Similarly, the H∞ output synchronization problem of multiweighted complex networks is also discussed. Finally, two numerical examples are presented to verify the correctness of the obtained results.
Finite-Time Synchronization of Coupled Networks With Markovian Topology and Impulsive Effects. This note considers globally finite-time synchronization of coupled networks with Markovian topology and distributed impulsive effects. The impulses can be synchronizing or desynchronizing with a certain average impulsive interval. By using the M-matrix technique and designing new Lyapunov functions and controllers, sufficient conditions are derived to ensure synchronization within a settling time, and the conditions do not contain any uncertain parameter. It is demonstrated theoretically and numerically that the number of consecutive impulses with minimum impulsive interval of the desynchronizing impulsive sequence should not be too large. It is interesting to discover that the settling time is related to the initial values of both the network and the Markov chain. Numerical simulations are provided to illustrate the effectiveness of the theoretical analysis.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Synopsis diffusion for robust aggregation in sensor networks Aggregating sensor readings within the network is an essential technique for conserving energy in sensor networks. Previous work proposes aggregating along a tree overlay topology in order to conserve energy. However, a tree overlay is very fragile, and the high rate of node and link failures in sensor networks often results in a large fraction of readings being unaccounted for in the aggregate. Value splitting on multi-path overlays, as proposed in TAG, reduces the variance in the error, but still results in significant errors. Previous approaches are fragile, fundamentally, because they tightly couple aggregate computation and message routing. In this paper, we propose a family of aggregation techniques, called synopsis diffusion, that decouples the two, enabling aggregation algorithms and message routing to be optimized independently. As a result, the level of redundancy in message routing (as a trade-off with energy consumption) can be adapted to both expected and encountered network conditions. We present a number of concrete examples of synopsis diffusion algorithms, including a broadcast-based instantiation of synopsis diffusion that is as energy-efficient as a tree, but dramatically more robust.
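The enabling property is that the synopsis is order- and duplicate-insensitive; a Flajolet-Martin-style count sketch, whose merge is a bitwise OR, is a typical instance (the constants and hashing below are illustrative, not the paper's exact construction):

    import hashlib

    BITS = 32

    def synopsis(item):
        # Index of the lowest set bit of a hash is geometrically distributed.
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
        r = (h & -h).bit_length() - 1 if h else BITS - 1
        return 1 << min(r, BITS - 1)

    def merge(a, b):
        return a | b       # idempotent: redundant multi-path delivery is harmless

    def estimate(s):
        r = 0
        while s & (1 << r):  # position of the lowest zero bit
            r += 1
        return int(2 ** r / 0.77351)  # standard FM correction factor

    s = 0
    for node in range(1000):
        s = merge(s, synopsis(f"reading-{node}"))
    s = merge(s, s)         # hearing everything twice changes nothing
    print(estimate(s))      # rough count, within a small factor of 1000

Because the merge is idempotent and commutative, routing redundancy can be tuned freely without corrupting the aggregate, which is exactly the decoupling claimed above.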
Charge-domain signal processing of direct RF sampling mixer with discrete-time filters in Bluetooth and GSM receivers RF circuits for multi-GHz frequencies have recently migrated to low-cost digital deep-submicron CMOS processes. Unfortunately, this process environment, which is optimized only for digital logic and SRAM memory, is extremely unfriendly for conventional analog and RF designs. We present fundamental techniques recently developed that transform the RF and analog circuit design complexity to the digitally intensive domain for a wireless RF transceiver, so that it enjoys benefits of digital and switched-capacitor approaches. Direct RF sampling techniques allow great flexibility in reconfigurable radio design. Digital signal processing concepts are used to help relieve analog design complexity, allowing one to reduce cost and power consumption in a reconfigurable design environment. The ideas presented have been used in Texas Instruments to develop two generations of commercial digital RF processors: a single-chip Bluetooth radio and a single-chip GSM radio. We further present details of the RF receiver front end for a GSM radio realized in a 90-nm digital CMOS technology. The circuit consisting of low-noise amplifier, transconductance amplifier, and switching mixer offers 32.5 dB dynamic range with digitally configurable voltage gain of 40 dB down to 7.5 dB. A series of decimation and discrete-time filtering follows the mixer and performs a highly linear second-order lowpass filtering to reject close-in interferers. The front-end gains can be configured with an automatic gain control to select an optimal setting to form a trade-off between noise figure and linearity and to compensate the process and temperature variations. Even under the digital switching activity, noise figure at the 40 dB maximum gain is 1.8 dB and +50 dBm IIP2 at the 34 dB gain. The variation of the input matching versus multiple gains is less than 1 dB. The circuit in total occupies 3.1 mm². The LNA, TA, and mixer consume less than 15.3 mA at a supply voltage of 1.4 V.
Correction of Mismatches in a Time-Interleaved Analog-to-Digital Converter in an Adaptively Equalized Digital Communication Receiver In this paper, techniques to overcome the errors caused by the offset, gain, sample-time, and bandwidth mismatches among time-interleaved analog-to-digital converters in a high-speed baseband digital communication receiver are presented. The errors introduced by these mismatches are corrected using least-mean-square adaptation implemented in digital-signal-processing blocks. Gain, sample-time, and bandwidth mismatches are corrected by modifying the operation of the adaptive receive equalizer itself to minimize the hardware overhead. Simulation results show that the gain, offset, sample-time, and bandwidth mismatches are sufficiently corrected for practical digital communication receivers.
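A simplified numpy sketch of the LMS idea (reference-aided and per-channel, unlike the paper's decision-directed integration into the receive equalizer; all mismatch values and step sizes are invented):

    import numpy as np

    M, N = 2, 20000                            # 2-way time interleaving
    t = np.arange(N)
    ref = np.sin(2 * np.pi * 0.0123 * t)       # ideal samples
    gain, off = np.array([1.00, 0.95]), np.array([0.00, 0.02])
    adc = gain[t % M] * ref + off[t % M]       # mismatched interleaved output

    g, o, mu = np.ones(M), np.zeros(M), 0.01   # per-channel LMS correction
    for n in range(N):
        ch = n % M
        e = ref[n] - (g[ch] * adc[n] + o[ch])  # error after correction
        g[ch] += mu * e * adc[n]
        o[ch] += mu * e

    print(g * gain, o + g * off)               # ~[1, 1] and ~[0, 0] at convergence

Each channel's corrected output converges so that its gain and offset mismatches cancel, mirroring the adaptation loops described above.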
Timings Matter: Standard Compliant IEEE 802.11 Channel Access for a Fully Software-based SDR Architecture We present a solution for enabling standard-compliant channel access for a fully software-based Software Defined Radio (SDR) architecture. With the availability of a GNURadio implementation of an Orthogonal Frequency Division Multiplexing (OFDM) transceiver, there is substantial demand for standard-compliant channel access. It has been shown that implementation of CSMA on a host PC is infeasible due to system-inherent delays. The common approach is to fully implement the protocol stack on the FPGA, which makes further updates or modifications to the protocols a complex and time-consuming task. We take another approach and investigate the feasibility of a fully software-based solution and show that standard-compliant broadcast transmissions are possible with marginal modifications of the FPGA. We envision the use of our system, for example, in the vehicular networking domain, where broadcast is the main communication paradigm. We show that our SDR solution exactly complies with the IEEE 802.11 Distributed Coordination Function (DCF) as well as Enhanced Distributed Channel Access (EDCA) timings. We were even able to identify shortcomings of commercial systems and prototypes.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bidirectional brain-machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifacts are detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 µs while small neural signals can be continuously monitored.
1.11
0.1
0.1
0.1
0.1
0.0525
0.015
0.00781
0
0
0
0
0
0
The Impact of Jitter on the Signal-to-Noise Ratio in Uniform Bandpass Sampling Receivers Receiver front-ends, enabling multi-mode multi-band operation, are essential for future efficient mobile communications and require a proper parametrization to achieve certain performance requirements. A key component in the receive chain is the analog-to-digital converter (ADC). To determine feasible configurations of the ADC, an abstract model is investigated in order to evaluate the performance in terms of the signal-to-noise ratio (SNR) of bandpass sampling receivers. It models the available types of sampling circuits, the impact of stationary and non-stationary jitter processes, as well as limited quantization resolution. The derived ADC model is used to determine the dominating jitter effect, either aperture or clock jitter, depending on the receiver setup. Furthermore, required root mean square jitter values are derived analytically for a predefined receiver noise figure. A properly designed bandpass sampling receiver, matching the proposed maximum jitter requirements, avoids significant SNR performance losses and can be employed in mobile communications.
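The textbook aperture-jitter limit conveys why bandpass sampling is so sensitive: for a full-scale sinusoid at input frequency f sampled with rms jitter σ, SNR = −20·log10(2π·f·σ). A quick check of what that implies (frequencies and jitter value are illustrative):

    import math

    def snr_jitter_db(f_hz, sigma_s):
        # Jitter-limited SNR for a sinusoid at f_hz with rms timing jitter sigma_s.
        return -20 * math.log10(2 * math.pi * f_hz * sigma_s)

    for f in (10e6, 900e6, 2.4e9):
        print(f"{f / 1e6:7.0f} MHz: {snr_jitter_db(f, 1e-12):5.1f} dB at 1 ps rms")

Since bandpass sampling digitizes at the carrier rather than at baseband, the same 1 ps of jitter that is harmless at 10 MHz (about 84 dB) caps the SNR near 36 dB at 2.4 GHz, which is the effect the paper's model quantifies in detail.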
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
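The key-to-node mapping is plain consistent hashing on an identifier circle; the sketch below resolves successors by scanning a sorted ring rather than with Chord's finger tables (identifier width and API are illustrative):

    import hashlib
    from bisect import bisect_right

    M = 2 ** 16                                 # identifier space size

    def ident(name):
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

    class Ring:
        def __init__(self, nodes):
            self.ids = sorted(ident(n) for n in nodes)

        def successor(self, key):
            # First node clockwise from the key's identifier, wrapping around.
            i = bisect_right(self.ids, ident(key))
            return self.ids[i % len(self.ids)]

    ring = Ring([f"node{i}" for i in range(8)])
    print(ring.successor("some-data-item"))     # node responsible for this key

In Chord proper, each node keeps only O(log n) fingers plus a successor list, so lookups take O(log n) hops instead of the linear scan used here.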
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
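A minimal numpy sketch of the method applied to the lasso, using the standard x/z/u scaled-form updates the review presents (penalty ρ, regularization λ, and problem sizes are illustrative):

    import numpy as np

    def lasso_admm(A, b, lam, rho=1.0, iters=200):
        n = A.shape[1]
        x, z, u = (np.zeros(n) for _ in range(3))
        Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cache the x-update solve
        Atb = A.T @ b
        for _ in range(iters):
            x = Q @ (Atb + rho * (z - u))              # quadratic x-minimization
            w = x + u
            z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0)  # soft threshold
            u = u + x - z                              # scaled dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 30))
    x_true = np.zeros(30); x_true[:4] = [3, -2, 1.5, 4]
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    print(np.round(lasso_admm(A, b, lam=1.0), 2)[:6])  # sparse, close to x_true

The x-step is a cached linear solve and the z-step is elementwise shrinkage, which is why the scheme decomposes so naturally across machines in the distributed settings discussed above.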
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load-transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) to a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reconfigurable Antenna for Future Wireless Communication Systems This paper deals with the processing techniques known as reconfigurable antennas: these methods are expected to be a booster for future high-rate wireless communications, both for the benefits in terms of performance and for the capacity gains. In particular, adaptive digital signal processing can provide improved performance for the desired signal in terms of error probability or signal-to-noise ratio, while the bandwidth efficiency can be increased linearly with the number of transmitting and receiving antennas. In this article, the main antenna processing techniques are reviewed and described, aiming at highlighting performance/complexity trade-offs and how they could be implemented in future systems. The coexistence of all these different technologies in a wireless environment requires high efficiency and flexibility of the transceiver. Future transceiver implementations based on the Software Defined Radio technology are also reviewed and described.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive voltage dividers.
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling the use of fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A Programmer-Interpreter Neural Network Architecture For Prefrontal Cognitive Control There is wide consensus that the prefrontal cortex (PFC) is able to exert cognitive control on behavior by biasing processing toward task-relevant information and by modulating response selection. This idea is typically framed in terms of top-down influences within a cortical control hierarchy, where prefrontal-basal ganglia loops gate multiple input-output channels, which in turn can activate or sequence motor primitives expressed in (pre-)motor cortices. Here we advance a new hypothesis, based on the notion of programmability and an interpreter-programmer computational scheme, on how the PFC can flexibly bias the selection of sensorimotor patterns depending on internal goal and task contexts. In this approach, multiple elementary behaviors representing motor primitives are expressed by a single multi-purpose neural network, which is seen as a reusable area of "recycled" neurons (interpreter). The PFC thus acts as a "programmer" that, without modifying the network connectivity, feeds the interpreter networks with specific input parameters encoding the programs (corresponding to network structures) to be interpreted by the (pre-)motor areas. Our architecture is validated in a standard test for executive function: the 1-2-AX task. Our results show that this computational framework provides a robust, scalable and flexible scheme that can be iterated at different hierarchical layers, supporting the realization of multiple goals. We discuss the plausibility of the "programmer-interpreter" scheme to explain the functioning of prefrontal-(pre)motor cortical hierarchies.
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Adaptive tracking control of leader-follower systems with unknown dynamics and partial measurements. In this paper, a decentralized adaptive tracking control is developed for a second-order leader–follower system with unknown dynamics and relative position measurements. Linearly parameterized models are used to describe the unknown dynamics of a self-active leader and all followers. A new distributed system is obtained by using the relative position and velocity measurements as the state variables. By only using the relative position measurements, a dynamic output–feedback tracking control together with decentralized adaptive laws is designed for each follower. At the same time, the stability of the tracking error system and the parameter convergence are analyzed with the help of a common Lyapunov function method. Some simulation results are presented to validate the proposed adaptive tracking control.
Plug-and-Play Decentralized Model Predictive Control for Linear Systems In this technical note, we consider a linear system structured into physically coupled subsystems and propose a decentralized control scheme capable to guarantee asymptotic stability and satisfaction of constraints on system inputs and states. The design procedure is totally decentralized, since the synthesis of a local controller uses only information on a subsystem and its neighbors, i.e. subsystems coupled to it. We show how to automatize the design of local controllers so that it can be carried out in parallel by smart actuators equipped with computational resources and capable to exchange information with neighboring subsystems. In particular, local controllers exploit tube-based Model Predictive Control (MPC) in order to guarantee robustness with respect to physical coupling among subsystems. Finally, an application of the proposed control design procedure to frequency control in power networks is presented.
Event-Based Leader-following Consensus of Multi-Agent Systems with Input Time Delay The event-based control strategy is an effective methodology for tackling the distributed control of multi-agent systems with limited on-board resources. This technical note focuses on event-based leader-following consensus for multi-agent systems described by general linear models and subject to input time delay between controller and actuator. For each agent, the controller updates are event-based and only triggered at its own event times. A necessary condition and two sufficient conditions on leader-following consensus are presented, respectively. It is shown that continuous communication between neighboring agents can be avoided and the Zeno-behavior of triggering time sequences is excluded. A numerical example is presented to illustrate the effectiveness of the obtained theoretical results.
Building Temperature Control Based on Population Dynamics Temperature control in buildings is a dynamic resource allocation problem, which can be approached using nonlinear methods based on population dynamics (i.e., replicator dynamics). A mathematical model of the proposed control technique is shown, including a stability analysis using passivity concepts for an interconnection of a linear multivariable plant driven by a nonlinear control system. In order to illustrate our control strategy, some simulations are performed, and we compare our proposed technique with other control strategies in a model with a fixed structure. Finally, experimental results are shown in order to observe the performance of some of these strategies in a multizone temperature testbed.
Self-constructing wavelet neural network algorithm for nonlinear control of large structures An adaptive control algorithm is presented for nonlinear vibration control of large structures subjected to dynamic loading. It is based on integration of a self-constructing wavelet neural network (SCWNN) developed specifically for structural system identification with an adaptive fuzzy sliding mode control approach. The algorithm is particularly suitable when the physical properties such as the stiffnesses and damping ratios of the structural system are unknown or partially known, which is the case when a structure is subjected to an extreme dynamic event such as an earthquake, as the structural properties change during the event. SCWNN is developed for functional approximation of the nonlinear behavior of large structures using neural networks and wavelets. In contrast to earlier work, the identification and control are processed simultaneously, which makes the resulting adaptive control more applicable to real life situations. A two-part growing and pruning criterion is developed to construct the hidden layer in the neural network automatically. A fuzzy compensation controller is developed to reduce the chattering phenomenon. The robustness of the proposed algorithm is achieved by deriving a set of adaptive laws for determining the unknown parameters of wavelet neural networks using two Lyapunov functions. No offline training of the neural network is necessary for the system identification process. In addition, the earthquake signals are considered as unidentified. This is particularly important for on-line vibration control of large civil structures since the external dynamic loading due to earthquake is not available in advance. The model is applied to vibration control of a seismically excited highway bridge benchmark problem, a continuous cast-in-place prestressed concrete box-girder bridge.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
MDVM System Concept, Paging Latency and Round-2 Randomized Leader Election Algorithm in SG The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication network. However, the mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support the high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology, buffer space limitations and is based on dynamically elected coordinators eliminating single point of failure. The algorithm is implemented in distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round and the possibility of second round execution decreases significantly with the increase in the size of SG (|N_a|). The overall message complexity of the algorithm is O(|N_a|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of MDVM system.
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.06
0
0
0
0
0
0
An SRAM-Based Accelerator for Solving Partial Differential Equations Accurate numerical solutions of partial differential equations (PDE) require high-precision fine-grid Jacobi iterations that are demanding in both computation and memory. To reduce the precision and memory, we reformulate the multi-grid Jacobi method in a residual form to enable the mapping of a high-precision PDE solver on SRAMs that perform low-precision parallel multiply-accumulates (MAC) in memory, reducing both energy and area. To improve performance, we employ a DLL to generate well-controlled unit pulses for driving word lines and a dual-ramp single-slope ADC to convert bit line outputs. The design is prototyped in a 1.87 mm² 180nm test chip made of four 320×64 MAC-SRAMs, each supporting 128× parallel 5b×5b MACs with 32 5b output ADCs and consuming 16.6mW at 200MHz. The test chip is demonstrated to reach an error tolerance of 10⁻⁸ in solving PDEs at 56.9GOPS.
An Energy-Efficient Programmable Mixed-Signal Accelerator for Machine Learning Algorithms We propose PROMISE, the first end-to-end design of a PROgrammable MIxed-Signal accElerator from Instruction Set Architecture to high-level language compiler for acceleration of diverse machine learning algorithms by exploiting the advantage of the superior energy efficiency from analog/mixed-signal processing.
Fundamental Limits on Energy-Delay-Accuracy of In-Memory Architectures in Inference Applications This article obtains fundamental limits on the computational precision of in-memory computing architectures (IMCs). An IMC noise model and associated signal-to-noise ratio (SNR) metrics are defined and their interrelationships analyzed to show that the accuracy of IMCs is fundamentally limited by the compute SNR (SNR_a) of its analog core, and that activation, weight, and output (ADC) precision needs to be assigned appropriately for the final output SNR (SNR_T) to approach SNR_a. The minimum precision criterion (MPC) is proposed to minimize the analog-to-digital converter (ADC) precision and hence its overhead. Three in-memory compute models—charge summing (QS), current summing (IS), and charge redistribution (QR)—are shown to underlie most known IMCs. Noise, energy, and delay expressions for the compute models are developed and employed to derive expressions for the SNR, ADC precision, energy, and latency of IMCs. The compute SNR expressions are validated via Monte Carlo simulations in a 65 nm CMOS process. For a 512-row SRAM array, it is shown that: 1) IMCs have an upper bound on their maximum achievable SNR_a due to constraints on energy, area and voltage swing, and this upper bound reduces with technology scaling for QS-based architectures; 2) MPC enables SNR_T to approach SNR_a with minimal ADC precision; and 3) QS-based (QR-based) architectures are preferred for low (high) compute SNR scenarios.
Shannon-Inspired Statistical Computing for the Nanoscale Era. Modern day computing systems are based on the von Neumann architecture proposed in 1945 but face dual challenges of: 1) unique data-centric requirements of emerging applications and 2) increased nondeterminism of nanoscale technologies caused by process variations and failures. This paper presents a Shannon-inspired statistical model of computation (statistical computing) that addresses the statis...
9.5 A 6K-MAC Feature-Map-Sparsity-Aware Neural Processing Unit in 5nm Flagship Mobile SoC On-device machine learning is critical for mobile products as it enables real-time applications (e.g. AI-powered camera applications), which need to be responsive, always available (i.e. do not require network connectivity) and privacy preserving. The platforms used in such situations have limited computing resources, power, and memory bandwidth. Enabling such on-device machine learning has triggered wide development of efficient neural-network accelerators that promise high energy and area efficiency compared to general-purpose processors, such as CPUs. The need to support a comprehensive range of neural networks has been important as well because the field of deep learning is evolving rapidly as depicted in Fig. 9.5.1. Recent work on neural-network accelerators has focused on improving energy efficiency, while obtaining high performance in order to meet the needs of real-time applications. For example, weight zero-skipping and pruning have been deployed in recent accelerators [2]–[7]. SIMD or systolic array-based accelerators [2]–[4], [6] provide flexibility to support various types of compute across a wide range of Deep Neural Network (DNN) models.
Mixed-Signal Computing for Deep Neural Network Inference Modern deep neural networks (DNNs) require billions of multiply-accumulate operations per inference. Given that these computations demand relatively low precision, it is feasible to consider analog computing, which can be more efficient than digital in the low-SNR regime. This overview article investigates the potential of mixed analog/digital computing approaches in the context of modern DNN processor architectures, which are typically limited by memory access. We discuss how memory-like and in-memory compute fabrics may help alleviate this bottleneck and derive asymptotic efficiency limits at the processing array level. It is shown that single-digit fJ/op energy efficiencies are feasible for 4-bit mixed-signal arithmetic. In this analysis, special consideration is given to the SNR and amortization requirements of the analog-digital interfaces. In addition, we consider the pros and cons for a variety of implementation styles and highlight the challenge of retaining high compute efficiency for a complete DNN accelerator design.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Termination detection for diffusing computations
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, and a transmitter with an 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8% power-added efficiency.
A control engineering perspective to radio resource management challenges in emerging cellular/“noncellular” radio systems The technological evolution of wireless cellular systems has been very rapid in the last two decades. In the coming decade of “converging wireless networks/systems/ecosystems”, there is an increasing demand for achieving very high data rates ubiquitously, even at high mobile speeds, as if we were connected to a wired ADSL line. Radio Resource Management (RRM) for the emerging wireless systems will be the key mechanism for achieving such high data rates. Indeed, RRM has already been a hot research area in both academia and industry for decades. And due to the complexity of the emerging wireless systems, an interdisciplinary approach and/or methodology is needed to tackle the new RRM challenges. In this paper, we provide a control engineering view onto some of the RRM challenges in emerging wireless networks, with a special emphasis on distributed power control. For example, we establish a link between power control design and dynamic neural networks, two different areas whose scope of interest, motivations and settings are completely different. Here, we emphasize the importance and the need of an interdisciplinary approach. Some subjects to be addressed within the paper include future-generation cellular/“noncellular” systems, radio resource management challenges, energy efficiency and distributed power control algorithms, variable-structure-systems based power control, channel/frequency allocation, spectral-clustering based channel allocation, and Hopfield neural networks.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Variation tolerant high resolution and low latency time-to-digital converter A high resolution time-to-digital converter (TDC) with low latency and low deadtime is proposed. A coarse time quantization derived from a differential inverter delay line is locally interpolated with passive voltage dividers. The high resolution TDC is monotonic by construction which makes the concept very robust against process variations. The feasibility is demonstrated with an 8-bit TDC with a resolution of 0.25 inverter delays in a 90 nm low power CMOS technology. The resolution limits imposed by clock uncertainty and local variations are derived theoretically.
A Transmitter Architecture Based on Delta–Sigma Modulation and Switch-Mode Power Amplification This brief presents a method of deploying RF switch-mode power amplification for varying envelope signals. Thereby the power amplifier can be operated as a switch, with high power efficiency as the result. The key idea is to transmit either a full RF period or none at all in such a way that the correct modulated RF signal is obtained after filtering. This is accomplished in a novel configuration of a low-pass Delta–Sigma modulator using a phase-modulated clock combined with a simple AND-gate. The designed modulator is easy to implement, displays very good linearity and offers time domain signals that promote the power efficiency of the power amplifier. The working principle is described through theory and simulations, and validation is done via measurements on a prototype of the modulator. Measurements on the prototype show that the presented modulator modulates a UMTS signal with more than 10-dB margin to the spectrum mask and EVM below 0.85% RMS (requirement: <17.5%). Index terms: Delta-sigma, power amplifier (PA), RF, switch mode, transmitter architecture, varying envelope.
The frequency spectrum of pulse width modulated signals The determination of the frequency spectrum of a pulse width modulated (PWM) signal with general band-limited input x(t) has been an open problem for many years. We describe a new approach that gives exact analytical expressions for the spectra of uniform-sampling PWM signals and natural-sampling PWM signals with single-edge as well as with double-edge modulation. For the special case of single tone modulating signals, our results reduce to those obtained previously using a double Fourier series method. We also show that if the maximum magnitude of the derivative of x(t) is smaller than twice the carrier frequency, then a PWM signal consists of a baseband signal y(t) together with y(t) phase-modulated onto each carrier harmonic, where, for uniform-sampling PWM, y(t) is a nonlinear function of the modulating signal x(t), while for natural-sampling PWM, y(t) is just x(t) itself, that is, there is no distortion in the baseband when natural sampling is used.
A Class-E PA With Pulse-Width and Pulse-Position Modulation in 65 nm CMOS A class-E power amplifier (PA) utilizing differential switches and a tuned passive output network improves power-added efficiency (PAE) and insensitivity to amplitude variations at its input. A modulator is introduced that takes outphased waveforms as its inputs and generates a pulse-width and pulse-position modulated (PWPM) signal as its output. The PWPM modulator is used in conjunction with a class-E PA to efficiently amplify constant envelope (e.g., GMSK) and non-constant envelope (e.g., QPSK, QAM, OFDM) signals with moderate peak-to-average ratios (PAR). The measured maximum output power of the PA is 28.6 dBm with a PAE of 28.5%, and the measured error vector magnitude (EVM) is 1.2% and 4.6% for GMSK and π/4-DQPSK (PAR ≈ 4 dB) modulated signals, respectively.
A Multiphase Buck Converter With a Rotating Phase-Shedding Scheme For Efficient Light-Load Control Mobile devices need to minimize their power consumption in order to maximize battery runtime, except during short extremely busy periods. This requirement makes dc-dc converters usually operate in standby mode or under light-load conditions. Therefore, implementation of an efficient regulation scheme under a light load is a key aspect of dc-dc converter design. This paper presents a multiphase buck converter with a rotating phase-shedding scheme for efficient light-load control. The converter includes four phases operating in an interleaved manner in order to supply high current with low output ripple. The multiphase converter implements a rotating phase-shedding scheme to distribute the switching activity concentrated on a single phase, resulting in a distribution of the aging effects among the phases instead of a single phase. The proposed multiphase buck converter was fabricated using a 0.18 μm bipolar CMOS DMOS process. The supply voltage ranges from 2.7 V to 5 V, and the maximum allowable output current is 4.5 A.
A Second-Order Antialiasing Prefilter for a Software-Defined Radio Receiver A new architecture is presented for a sinc²(f) filter intended to sample channels of varying bandwidth when surrounded by blockers and adjacent bands. The sample rate is programmable from 5 to 40 MHz, and aliases are suppressed by 45 dB or more. The noise and linearity performance of the filter is analyzed, and the effects of various imperfections such as transconductor finite output impedance, interchannel gain mismatch, and residual offsets in the channels are studied. Furthermore, it is proved that the filter is robust to clock jitter. The 0.13-μm CMOS circuit consumes 6 mA from a 1.2-V supply.
Impulse radio: how it works Impulse radio, a form of ultra-wide bandwidth (UWB) spread-spectrum signaling, has properties that make it a viable candidate for short-range communications in dense multipath environments. This letter describes the characteristics of impulse radio using a modulation format that can be supported by currently available impulse signal technology and gives analytical estimates of its multiple-access capability under ideal multiple-access channel conditions.
A 1.75-GHz polar modulated CMOS RF power amplifier for GSM-EDGE This work presents a fully integrated linearized CMOS RF amplifier, integrated in a 0.18-μm CMOS process. The amplifier is implemented on a single chip, requiring no external matching or tuning networks. Peak output power is 27 dBm with a power-added efficiency (PAE) of 34%. The amplitude modulator, implemented on the same chip as the RF amplifier, modulates the supply voltage of the RF amp...
Spur Reduction Techniques for Phase-Locked Loops Exploiting A Sub-Sampling Phase Detector This paper presents phase-locked loop (PLL) reference-spur reduction design techniques exploiting a sub-sampling phase detector (SSPD) (which is also referred to as a sampling phase detector). The VCO is sampled by the reference clock without using a frequency divider and an amplitude controlled charge pump is used which is inherently insensitive to mismatch. The main remaining source of the VCO reference spur is the periodic disturbance of the VCO by the sampling at the reference frequency. The underlying VCO sampling spur mechanisms are analyzed and their effect is minimized by using dummy samplers and isolation buffers. A duty-cycle-controlled reference buffer and delay-locked loop (DLL) tuning are proposed to further reduce the worst case spur level. To demonstrate the effectiveness of the proposed spur reduction techniques, a 2.21 GHz PLL is designed and fabricated in 0.18 μm CMOS technology. While using a high loop-bandwidth-to-reference-frequency ratio of 1/20, the reference spur measured from 20 chips is < -80 dBc. The PLL consumes 3.8 mW while the in-band phase noise is -121 dBc/Hz at 200 kHz and the output jitter integrated from 10 kHz to 100 MHz is 0.3 ps rms.
Cost Efficient Resource Management in Fog Computing Supported Medical Cyber-Physical System. With the recent development in information and communication technology, more and more smart devices penetrate into people's daily life to promote the life quality. As a growing healthcare trend, medical cyber-physical systems (MCPSs) enable seamless and intelligent interaction between the computational elements and the medical devices. To support MCPSs, cloud resources are usually explored to pro...
Parsec: A Parallel Simulation Environment for Complex Systems Design and development costs for extremely large systems could be significantly reduced if only there were efficient techniques for evaluating design alternatives and predicting their impact on overall system performance metrics. Due to the systems' analytical intractability, simulation is the most common performance evaluation technique for such systems. However, the long execution times needed for sequential simulation models often hampers evaluation. The slow speeds of sequential model execution have led to growing interest in the use of parallel execution for simulating large-scale systems. Widespread use of parallel simulation, however, has been significantly hindered by a lack of tools for integrating parallel model execution into the overall framework of system simulation. Another drawback to wide-spread use of simulations is the cost of model design and maintenance. The simulation environment the authors developed at UCLA attempts to address some of these issues. It consists of three primary components: a parallel simulation language called Parsec (parallel simulation environment for complex systems), its GUI, called Pave, and the portable runtime system that implements the simulation algorithms.
22.7-dB Gain −19.7-dBm ICP1dB UWB CMOS LNA A fully differential CMOS ultrawideband low-noise amplifier (LNA) is presented. The LNA has been realized in a standard 90-nm CMOS technology and consists of a common-gate stage and two subsequent common-source stages. The common-gate input stage realizes a wideband input impedance matching to the source impedance of the receiver (i.e., the antenna), whereas the two subsequent common-source stages...
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0 … score_13): 1.104085, 0.052914, 0.037079, 0.009927, 0.0024, 0.001055, 0.000422, 0.000161, 0.00001, 0, 0, 0, 0, 0
Algorithmic Improvement and GPU Acceleration of the GenASM Algorithm We improve on GenASM, a recent algorithm for genomic sequence alignment, by significantly reducing its memory footprint and bandwidth requirement. Our algorithmic improvements reduce the memory footprint by 24× and the number of memory accesses by 12×. We efficiently parallelize the algorithm for GPUs, achieving a 4.1× speedup over a CPU implementation of the same algorithm, a 62× speedup over minimap2's CPU-based KSW2 and a 7.2× speedup over the CPU-based Edlib for long reads.
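For context (our sketch, not the paper's code): GenASM builds on the Bitap family of bitvector algorithms. The kernel below is the exact-match shift-and core; the full algorithm adds one bitvector per allowed edit to handle mismatches and indels.

```python
# Minimal Bitap (shift-and) exact matcher; names are our own illustration.
def bitap_exact(text: str, pattern: str):
    m = len(pattern)
    assert 0 < m <= 64, "pattern must fit in one machine word for this sketch"
    # Per character, a mask with bit i set where pattern[i] == c.
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    hits, R = [], 0
    for j, c in enumerate(text):
        # Extend every active prefix by one character in O(1) word ops.
        R = ((R << 1) | 1) & masks.get(c, 0)
        if R & (1 << (m - 1)):
            hits.append(j - m + 1)   # start index of an exact occurrence
    return hits

print(bitap_exact("ACGTACGTGACGT", "ACGT"))  # -> [0, 4, 9]
```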
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way reaching on average 125 GCUPS and almost a peak of 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All of the three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively.
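As a plain-CPU reference for the shared dynamic-programming pattern (our sketch; GSWABE tiles this matrix across CUDA threads, and the scoring values here are illustrative), the local-alignment recurrence is:

```python
# Compact Smith-Waterman local alignment score; scoring is illustrative.
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment clamps at 0; global/semi-global drop the clamp.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best  # optimal local alignment score

print(smith_waterman("GATTACA", "GCATGCU"))
```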
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FindeR: Accelerating FM-Index-Based Exact Pattern Matching in Genomic Sequences through ReRAM Technology Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by 83% ~ 30K× and throughput per Watt by 3.5×~42.5K×.
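For context on the computational kernel (our minimal Python sketch of standard FM-index backward search; FindeR's contribution is mapping the rank() step onto ReRAM Hamming-distance units, not this code):

```python
# Build a toy FM-index (BWT + C table) and count exact occurrences.
def fm_index(text: str):
    text += "$"                                   # unique terminator
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)        # last column L
    C, total = {}, 0                              # C[c]: chars < c in text
    for c in sorted(set(text)):
        C[c] = total
        total += text.count(c)
    return bwt, C

def rank(bwt: str, c: str, i: int) -> int:
    return bwt[:i].count(c)   # occurrences of c in L[0:i]; O(1) with tables

def count_occurrences(bwt: str, C: dict, pattern: str) -> int:
    lo, hi = 0, len(bwt)                          # current suffix-array interval
    for c in reversed(pattern):                   # extend pattern leftwards
        if c not in C:
            return 0
        lo = C[c] + rank(bwt, c, lo)
        hi = C[c] + rank(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = fm_index("ACGTACGTGACGT")
print(count_occurrences(bwt, C, "ACGT"))  # -> 3
```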
SeGraM: a universal hardware accelerator for genomic sequence-to-graph and sequence-to-sequence mapping A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need to have a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. Since sequence-to-sequence mapping can be treated as a special case of sequence-to-graph mapping, we aim to design an accelerator that is efficient for both linear and graph-based read mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator, which finds the candidate locations in a given genome graph; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator, which performs alignment between a given read and the subgraph identified by MinSeed. We couple SeGraM with high-bandwidth memory to exploit low latency and highly-parallel memory access, which alleviates the memory bottleneck. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph (i.e., S2G) and sequence-to-sequence (i.e., S2S) mapping pipelines. First, SeGraM outperforms state-of-the-art S2G mapping tools by 5.9×/3.9× and 106×/742× for long and short reads, respectively, while reducing power consumption by 4.1×/4.4× and 3.0×/3.2×. Second, BitAlign outperforms a state-of-the-art S2G alignment tool by 41×-539× and three S2S alignment accelerators by 1.2×-4.8×. We conclude that SeGraM is a high-performance and low-cost universal genomics mapping accelerator that efficiently supports both sequence-to-graph and sequence-to-sequence mapping pipelines.
An FPGA Implementation of A Portable DNA Sequencing Device Based on RISC-V Miniature and mobile DNA sequencers are steadily growing in popularity as effective tools for genetics research. As basecalling algorithms continue to evolve, basecalling poses a serious challenge for small computing devices despite its increasing accuracy. Although general-purpose computing chips such as CPUs and GPUs can achieve fast results, they are not energy efficient enough for mobile applications. This paper presents an innovative solution, a basecalling hardware architecture based on RISC-V ISA, and after validation with our custom FPGA verification platform, it demonstrates a 1.95x energy efficiency ratio compared to x86. There is also a 38% improvement in energy efficiency ratio compared to ARM. In addition, this study also completes the verification work for subsequent ASIC designs.
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS.We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering, and Cheap K-mer Selection.We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
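A toy seed-and-extend mapper in the spirit of the paper's Adjacency Filtering (our simplification with an illustrative k-mer length and exact verification; mrFAST's actual implementation differs):

```python
# A candidate location from one k-mer is only verified if the read's other
# k-mers also hash to the matching adjacent locations.
from collections import defaultdict

K = 4

def build_index(ref: str):
    index = defaultdict(list)
    for i in range(len(ref) - K + 1):
        index[ref[i:i+K]].append(i)
    return index

def map_read(read: str, ref: str, index) -> list:
    seeds = [(off, read[off:off+K]) for off in range(0, len(read) - K + 1, K)]
    candidates = set()
    for off, kmer in seeds:
        for loc in index.get(kmer, []):
            candidates.add(loc - off)            # implied read start in ref
    hits = []
    for start in candidates:
        if start < 0 or start + len(read) > len(ref):
            continue
        # Adjacency filter: cheap agreement check on the remaining seeds
        # before paying for full verification.
        if all(start + off in index.get(kmer, []) for off, kmer in seeds):
            hits.append(start)
    return sorted(hits)

ref = "TTACGTACGTGACGTAA"
print(map_read("ACGTGACG", ref, build_index(ref)))  # -> [6]
```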
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
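A small worked example of the construction (ours; the two-node network is illustrative): enumerate the 2^n states, build the 0/1 transition matrix of the equivalent linear system, and read the fixed points off its diagonal, as in result (a).

```python
# Linear representation of a Boolean network: each of the 2^n states is a
# basis vector, and the dynamics become multiplication by a 0/1 matrix L.
import numpy as np

# Example network (illustrative): x1' = x2, x2' = x1 AND x2.
def step(state):
    x1, x2 = state
    return (x2, x1 and x2)

n = 2
N = 2 ** n
L = np.zeros((N, N), dtype=int)
for s in range(N):
    bits = tuple((s >> (n - 1 - i)) & 1 for i in range(n))
    t = step(bits)
    t_idx = sum(b << (n - 1 - i) for i, b in enumerate(t))
    L[t_idx, s] = 1                  # column s maps to row t_idx

# Fixed points are diagonal ones, so their count equals trace(L); cycle
# counts follow from traces of powers of L.
fixed_points = [s for s in range(N) if L[s, s] == 1]
print(L)
print("fixed points (as integers):", fixed_points)
```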
The Transitive Reduction of a Directed Graph
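Only the title survives for this entry. For context, a minimal sketch (ours) of the operation it names: for a DAG, delete every edge that is implied by a longer path, leaving the fewest edges that preserve reachability.

```python
# Transitive reduction of a DAG: drop edge (u, v) when v is still reachable
# from u without that edge.  Correct for acyclic graphs.
def transitive_reduction(adj: dict) -> dict:
    def reachable(src, dst, skip_edge):
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            for nxt in adj.get(node, []):
                if (node, nxt) == skip_edge or nxt in seen:
                    continue
                if nxt == dst:
                    return True
                seen.add(nxt)
                stack.append(nxt)
        return False

    return {u: [v for v in vs if not reachable(u, v, (u, v))]
            for u, vs in adj.items()}

dag = {"a": ["b", "c"], "b": ["c"], "c": []}
print(transitive_reduction(dag))  # {'a': ['b'], 'b': ['c'], 'c': []}
```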
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using real received-signal data.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A Hybrid Dynamic Load Balancing Algorithm For Distributed Systems Using Genetic Algorithms Dynamic Load Balancing (DLB) is sine qua non in modern distributed systems to ensure the efficient utilization of computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. The framework uses a Genetic Algorithm (GA)-based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby essentially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round trip time, and average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm utilizing OnLine Transaction Processing (OLTP) benchmark and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, or/and CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves the performance under different system states and under a wide range of workloads compared to a traditional decentralized algorithm.
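To make the supernode-selection idea concrete, a toy GA (our sketch, far simpler than the proposed framework; the fitness function and parameters are illustrative):

```python
# Chromosomes are candidate supernode sets; fitness prefers lightly loaded
# nodes so that load balancing starts from the least-busy coordinators.
import random

random.seed(1)
loads = [random.uniform(0.0, 1.0) for _ in range(20)]   # current node loads
K = 3                                                   # supernodes to elect

def fitness(chrom):
    return -sum(loads[i] for i in chrom)    # lower supernode load = better

def crossover(a, b):
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, K)))

def mutate(chrom):
    out = set(chrom)
    out.discard(random.choice(tuple(out)))
    while len(out) < K:                     # refill with a random node
        out.add(random.randrange(len(loads)))
    return tuple(sorted(out))

pop = [tuple(sorted(random.sample(range(len(loads)), K))) for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                        # elitism plus mutated offspring
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]

best = max(pop, key=fitness)
print("supernodes:", best, "mean load:", sum(loads[i] for i in best) / K)
```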
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs) as one of today’s main flavor of deep learning techniques dominate in various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural networks compression and hardware acceleration, whi...
Scores (score_0 … score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.05, 0, 0, 0, 0, 0, 0
Replica compensated linear regulators for supply-regulated phase-locked loops Supply-regulated phase-locked loops rely upon the VCO voltage regulator to maintain a low sensitivity to supply noise and hence low overall jitter. By analyzing regulator supply rejection, we show that in order to simultaneously meet the bandwidth and low dropout requirements, previous regulator implementations used in supply-regulated PLLs suffer from unfavorable tradeoffs between power supply rejection and power consumption. We therefore propose a compensation technique that places the regulator's amplifier in a local replica feedback loop, stabilizing the regulator by increasing the amplifier bandwidth while lowering its gain. Even though the forward gain of the amplifier is reduced, supply noise affects the replica output in addition to the actual output, and therefore the amplifier's gain to reject supply noise is effectively restored. Analysis shows that for reasonable mismatch between the replica and actual loads, regulator performance is uncompromised, and experimental results from a 90 nm SOI test chip confirm that with the same power consumption, the proposed regulator achieves at least 4 dB higher supply rejection than the previous regulator design. Furthermore, simulations show that if not for other supply rejection-limiting components in the PLL, the supply rejection improvement of the proposed regulator is greater than 15 dB.
40 Gb/s Transimpedance-AGC Amplifier and CDR Circuit for Broadband Data Receivers in 90 nm CMOS High-speed front-end amplifiers and CDR circuits play critical roles in broadband data receivers as the former needs to perform amplification at high data rate and the latter has to retime the data with the extracted low-jitter clock. In this paper, the design and experimental results of a 40 Gb/s transimpedance-AGC amplifier and CDR circuit are described. The transimpedance amplifier incorporates reversed triple-resonance networks (RTRNs) and negative feedback in a common-gate configuration. A mathematical model is derived to facilitate the design and analysis of the RTRN, showing that the bandwidth is extended by a larger factor compared to using the shunt-series peaking technique, especially in cases when the parasitic capacitance is dominated by the next stage. Operating at 40 Gb/s, the amplifier provides an overall gain of 2 kΩ and a differential output swing of 520 mVpp over the specified input range. The measured integrated input-referred noise is 3.3 μA rms. The half-rate CDR circuit employs a direction-determined rotary-wave quadrature VCO to solve the bidirectional-rotation problem in conventional rotary-wave oscillators. This guarantees the phase sequence while negligibly affecting the phase noise. With 40 Gb/s 2^31 - 1 PRBS input, the recovered clock jitter is 0.7 ps rms. The retimed data exhibits 13.3 ps pp jitter. Fabricated in 90 nm digital CMOS technology, the overall amplifier consumes 75 mW and the CDR circuit consumes 48 mW excluding the output buffers, all from a 1.2 V supply.
Verifying global start-up for a Möbius ring-oscillator This paper presents the formal verification of start-up for a differential ring-oscillator circuit used in industrial designs. We present an efficient algorithm for finding DC equilibria to establish a condition that ensures the oscillator is free from lock-up. Further, we present a formal verification solution for the problem. Using dynamical systems theory, we show that any oscillator must have a non-empty set of states from which it fails to start properly. However, it is possible to show that these failures only occur with zero probability. To do so, this paper generalizes the "cone argument" initially presented in (Mitchell and Greenstreet, in Proceedings of the third workshop on designing correct circuits, 1996) and proves the soundness of this generalization. This paper also shows how concepts from analog design such as differential operation can be soundly incorporated into the verification to produce simpler models and reduce the complexity of the verification task.
Design of high-speed wireline transceivers for backplane communications in 28nm CMOS This paper describes the design of the architecture and circuit blocks for backplane communication transceivers. A channel study investigates the major challenges in the design of high-speed reconfigurable transceivers. Architectural solutions resolving channel-induced signal distortions are proposed and their effectiveness on various channels is investigated. Subsequently, the paper describes the design of a 0.6-13.1 Gb/s fully-adaptive backplane transceiver embedded in state-of-the-art low-leakage 28nm CMOS FPGAs. The receiver front-end utilizes a 3-stage CTLE, a 7-tap speculative DFE, and a 4-tap sliding DFE to remove the immediate post-cursor ISI up to 64 taps. The clocking network provides a continuous operation range between 0.6 and 13.1 Gb/s. The transceiver achieves BER < 10^-15 over a 31 dB-loss backplane at 13.1 Gb/s and over channels with 10GBASE-KR characteristics at 10.3125 Gb/s.
A 28-Gb/s 4-Tap FFE/15-Tap DFE Serial Link Transceiver in 32-nm SOI CMOS Technology. This paper presents a 28-Gb/s transceiver in 32-nm SOI CMOS technology for chip-to-chip communications over high-loss electrical channels such as backplanes. The equalization needed for such applications is provided by a 4-tap baud-spaced feed-forward equalizer (FFE) in the transmitter and a two-stage peaking amplifier and 15-tap decision-feedback equalizer (DFE) in the receiver. The transmitter e...
Fully Digital Transmit Equalizer With Dynamic Impedance Modulation. This paper analyzes the energy efficiency of different transmit equalizer driver topologies. Dynamic impedance modulation is found to be the most energy-efficient mechanism for transmit pre-emphasis, when compared with impedance-maintaining current and voltage-mode drivers. The equalizing transmitter is implemented as a digital push-pull impedance-modulating (RM) driver with fully digital RAM-DAC ...
A Reference-Less Clock and Data Recovery Circuit Using Phase-Rotating Phase-Locked Loop A reference-less half-rate digital clock and data recovery (CDR) circuit employing a phase-rotating phase-locked loop (PRPLL) as phase interpolator is presented. By implementing the proportional control in phase domain within the PRPLL, the proposed CDR decouples jitter transfer (JTRAN) bandwidth from jitter tolerance (JTOL) corner frequency, eliminates jitter peaking, and removes JTRAN dependence on bang-bang phase detector gain. Fabricated in a 90 nm CMOS process, the prototype CDR achieves error-free operation (BER < 10^-12) with PRBS data sequences ranging from PRBS7 to PRBS31. At 5 Gb/s, it consumes 13.1 mW power and achieves a recovered clock long-term jitter of 5.0 ps rms/44.0 ps pp when operating with PRBS31 input data. The measured JTRAN bandwidth is 2 MHz and JTOL corner frequency is 16 MHz. The CDR is tolerant to 110 mV pp of sinusoidal noise on the DCO supply voltage at the worst case noise frequency of 7 MHz. At 2.5 GHz, the PRPLL consumes 2.9 mW and achieves -134 dBc/Hz phase noise at 1 MHz frequency offset. The differential and integral non-linearity of its digital-to-phase transfer characteristic are within ±0.2 LSB and ±0.4 LSB, respectively.
A Sub-0.25-pJ/bit 47.6-to-58.8-Gb/s Reference-Less FD-Less Single-Loop PAM-4 Bang-Bang CDR With a Deliberate-Current-Mismatch Frequency Acquisition Technique in 28-nm CMOS This article reports a half-rate single-loop bang-bang clock and data recovery (BBCDR) circuit without the need of reference and frequency detector (FD). Specifically, we propose a deliberate-current-mismatch charge-pump pair to enable fast and robust frequency acquisition without identifying the frequency error polarity. This technique eliminates the need for a complex high-speed data or clock pa...
Phase averaging and interpolation using resistor strings or resistor rings for multi-phase clock generation Circuit techniques using resistor strings (R-strings) and resistor rings (R-rings) for phase averaging and interpolation are described. Phase averaging can reduce phase errors, and phase interpolation can increase the number of available phases. In addition to the waveform shape, the averaging and the interpolation performances of the R-strings and R-rings are determined by the clock frequency normalized by a RC time constant of the circuits. To attain better phase accuracy, a smaller RC time constant is required, but at the expense of larger power dissipation. To demonstrate the resistor ring's capability of phase averaging and interpolation, a 125-MHz 8-bit digital-to-phase converter (DPC) was designed and fabricated using a standard 0.35-μm SPQM CMOS technology. Measurement results show that the DPC attains 8-bit resolution using the proposed phase averaging and interpolation technique.
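A quick worked example (ours) under the simplifying assumption of sinusoidal clock waveforms: a weighted average of two equal-frequency phases is again a sinusoid whose phase lies between them, which is exactly the interpolation a resistor-string tap performs (the paper shows the waveform shape and the RC time constant set how accurate this is in practice).

```python
# Phase of (1-w)*sin(wt+phi1) + w*sin(wt+phi2): phasor addition gives the
# interpolated phase directly; w plays the role of the tap position.
import numpy as np

phi1, phi2 = 0.0, np.pi / 4          # two adjacent input phases (rad)
w = np.linspace(0, 1, 5)             # tap position along the string
phi = np.arctan2((1 - w) * np.sin(phi1) + w * np.sin(phi2),
                 (1 - w) * np.cos(phi1) + w * np.cos(phi2))
print(np.degrees(phi))               # 0..45 deg, mildly nonlinear in w
```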
Differential Power Analysis. Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
Difference engine: harnessing memory redundancy in virtual machines Virtual machine monitors (VMMs) are a popular platform for Internet hosting centers and cloud-based compute services. By multiplexing hardware resources among virtual machines (VMs) running commodity operating systems, VMMs decrease both the capital outlay and management overhead of hosting centers. Appropriate placement and migration policies can take advantage of statistical multiplexing to effectively utilize available processors. However, main memory is not amenable to such multiplexing and is often the primary bottleneck in achieving higher degrees of consolidation. Previous efforts have shown that content-based page sharing provides modest decreases in the memory footprint of VMs running similar operating systems and applications. Our studies show that significant additional gains can be had by leveraging both subpage level sharing (through page patching) and incore memory compression. We build Difference Engine, an extension to the Xen VMM, to support each of these---in addition to standard copy-on-write full-page sharing---and demonstrate substantial savings across VMs running disparate workloads (up to 65%). In head-to-head memory-savings comparisons, Difference Engine outperforms VMware ESX server by a factor 1.6--2.5 for heterogeneous workloads. In all cases, the performance overhead of Difference Engine is less than 7%.
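For context, the baseline mechanism that Difference Engine extends with sub-page patching and in-core compression is content-based full-page sharing; a minimal sketch (ours, not the Xen implementation):

```python
# Hash each page; identical pages share one frame, which would be marked
# copy-on-write so a later write breaks the sharing safely.
import hashlib

PAGE = 4096

def deduplicate(pages: list):
    store = {}                       # content digest -> canonical frame index
    frames, mapping = [], []
    for data in pages:
        digest = hashlib.sha1(data).digest()
        if digest not in store:
            store[digest] = len(frames)
            frames.append(data)      # first copy becomes the shared frame
        mapping.append(store[digest])
    return frames, mapping

pages = [b"A" * PAGE, b"B" * PAGE, b"A" * PAGE, b"A" * PAGE]
frames, mapping = deduplicate(pages)
print(len(frames), "frames kept; page->frame map:", mapping)  # 2, [0,1,0,0]
```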
Optimal regional consecutive leader election in mobile ad-hoc networks The regional consecutive leader election (RCLE) problem requires mobile nodes to elect a leader within bounded time upon entering a specific region. We prove that any algorithm requires Ω(Dn) rounds for leader election, where D is the diameter of the network and n is the total number of nodes. We then present a fault-tolerant distributed algorithm that solves the RCLE problem and works even in settings where nodes do not have access to synchronized clocks. Since nodes set their leader variable within O(Dn) rounds, our algorithm is asymptotically optimal with respect to time complexity. Due to its low message bit complexity, we believe that our algorithm is of practical interest for mobile wireless ad-hoc networks. Finally, we present a novel and intuitive constraint on mobility that guarantees a bounded communication diameter among nodes within the region of interest.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or Gm-C integrators while a delay line is used to perform voltage to time conversion and to sum time signals. A simple flip-flop generates the pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for a wide bandwidth error amplifier, pulse-width modulator (PWM) in analog controllers or high resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm², of which the controller occupies only 0.0375 mm². It operates over a wide range of switching frequencies (10-25 MHz) and regulates output to any desired voltage in the range of 0.6 V to 1.5 V with 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator's signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 μW and occupying an active area of 0.0126 mm².
Scores (score_0 … score_13): 1.03549, 0.023756, 0.023309, 0.016807, 0.011717, 0.004889, 0.00195, 0.000889, 0.000017, 0, 0, 0, 0, 0
Wavelet Denoising of TSD Deflection Slope Measurements for Improved Pavement Structural Evaluation. Continuous deflection devices (CDDs) can safely measure pavement deflection (or other related properties) while traveling at highway speed, which reduces traffic disruption. CDD measurements are contaminated with relatively high noise levels compared to stop-and-go devices such as the Falling Weight Deflectometer. In this article, we use wavelet transform denoising to remove the noise and estimate the true deflection slope measurements obtained from the Traffic Speed Deflectometer. Results show that failure to denoise deflection slope measurements can lead to calculated Effective Structural Number values that are highly variable (unstable). Attempting to filter these highly variable measurements can lead to erroneous results. We also use wavelet transform denoising to identify localized weak spots such as those that are caused by pavement reflection cracking. Identifying weak spots with wavelets is possible because wavelets are spatially adaptive to local features. In contrast, a linear filter is not capable of adapting to local features.
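A minimal sketch of this kind of denoising (ours, using PyWavelets; the wavelet, decomposition level, and universal threshold are illustrative choices rather than the article's exact settings):

```python
# Wavelet soft-threshold denoising of a noisy 1-D signal standing in for
# deflection slope measurements.
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 1024)
clean = np.sin(x) + 0.3 * np.sin(5 * x)
noisy = clean + 0.2 * rng.standard_normal(x.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise from finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")[: noisy.size]

print("RMS before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMS after :", np.sqrt(np.mean((denoised - clean) ** 2)))
```

Because the wavelet coefficients are spatially localized, a genuine local feature (a weak spot) survives thresholding while broadband noise does not, which is the adaptivity the article contrasts with linear filtering.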
Identification of Instantaneous Modal Parameter of Time-Varying Systems via a Wavelet-Based Approach and Its Application. This work presents an efficient approach using time-varying autoregressive with exogenous input (TVARX) model and a substructure technique to identify the instantaneous modal parameters of a linear time-varying structure and its substructures. The identified instantaneous natural frequencies can be used to identify earthquake damage to a building, including the specific floors that are damaged. An appropriate TVARX model of the dynamic responses of a structure or substructure is established using a basis function expansion and regression approach combined with continuous wavelet transform. The effectiveness of the proposed approach is validated using numerically simulated earthquake responses of a five-storey shear building with time-varying stiffness and damping coefficients. In terms of accuracy in determining the instantaneous modal parameters of a structure from noisy responses, the proposed approach is superior to typical basis function expansion and regression approach. The proposed method is further applied to process the dynamic responses of an eight-storey steel frame in shaking table tests to identify its instantaneous modal parameters and to locate the storeys whose columns yielded under a strong base excitation.
The Super-Turing Computational Power of Plastic Recurrent Neural Networks We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is left unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Adaptive Failure Compensation Control for Uncertain Systems With Stochastic Actuator Failures. In this technical note, an adaptive failure compensation problem is studied for a class of nonlinear uncertain systems subject to stochastic actuator failures and unknown parameters. Stochastic functions related to Markovian variables are introduced to denote the failure scaling factors for each actuator, which is much more practical and challenging. First, by taking the Markovian variables in the system into account, some preliminary results are established. Then, by employing a backstepping strategy, an adaptive failure compensation control scheme is proposed, which ensures the boundedness in probability of all the closed-loop signals in the presence of stochastic actuator failures. A simulation example is presented to show the effectiveness of the proposed scheme.
Network-Decentralized Control Strategies for Stabilization We consider the problem of stabilizing a class of systems formed by a set of decoupled subsystems (nodes) interconnected through a set of controllers (arcs). Controllers are network-decentralized, i.e., they use information exclusively from the nodes they interconnect. This condition requires a block-structured feedback matrix, having the same structure as the transpose of the overall input matrix of the system. If the subsystems do not have common unstable eigenvalues, we demonstrate that the problem is solvable. In the general case, we provide sufficient conditions for solvability. When subsystems are identical and each input agent controls a pair of subsystems with input matrices having opposite sign (flow networks), we prove that stabilization is possible if and only if the system is connected with the external environment. Our proofs are constructive and lead to structured linear matrix inequalities (LMIs).
Wireless sensing and vibration control with increased redundancy and robustness design. Control systems with long distance sensor and actuator wiring have the problem of high system cost and increased sensor noise. Wireless sensor network (WSN)-based control systems are an alternative solution involving lower setup and maintenance costs and reduced sensor noise. However, WSN-based control systems also encounter problems such as possible data loss, irregular sampling periods (due to the uncertainty of the wireless channel), and the possibility of sensor breakdown (due to the increased complexity of the overall control system). In this paper, a wireless microcontroller-based control system is designed and implemented to wirelessly perform vibration control. The wireless microcontroller-based system is quite different from regular control systems due to its limited speed and computational power. Hardware, software, and control algorithm design are described in detail to demonstrate this prototype. Model and system state compensation is used in the wireless control system to solve the problems of data loss and sensor breakdown. A positive position feedback controller is used as the control law for the task of active vibration suppression. Both wired and wireless controllers are implemented. The results show that the WSN-based control system can be successfully used to suppress the vibration and produces resilient results in the presence of sensor failure.
Energy-regenerative model predictive control This paper presents some solution approaches to the problem of optimal energy-regenerative model predictive control for linear systems subject to stability and/or dissipativity constraints, as well as hard constraints on the state and control vectors. The problem is generally non-convex in the objective and some of the constraints, thereby resulting in a non-convex optimization problem to be solved at each time step. Multiple extended convex relaxation approaches are considered. As a result, a more conservative semi-definite programming problem is proposed to be solved at each time step. The feasibility and stability of the resulting closed-loop system are also examined. The approaches are validated using a numerical example of maximizing energy regeneration from a single degree of freedom vibrating system subject to a level-set constraint on some performance metric characterizing the quality of vibration isolation achieved by the system. The constraint is described in terms of an upper bound on the L2-gain of the system from the input to a vector of appropriately selected system outputs.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
The price of validity in dynamic networks Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) [6, 13, 21, 34, 35, 37] on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, single-site validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
Power Amplifier Selection for LINC Applications. Linear amplification with nonlinear components (LINC) using a nonisolating combiner has the potential for high efficiency and good linearity. In past work, the interaction between two power amplifiers has been interpreted as a time-varying load presented at the output of amplifiers, and the linearity and efficiency of the LINC system has been evaluated according to how the power amplifiers respond...
An Electro-Magnetic Energy Harvesting System With 190 nW Idle Mode Power Consumption for a BAW Based Wireless Sensor Node. State-of-the-art wireless sensor nodes are mostly supplied by batteries. Such systems have the disadvantage that they are not maintenance free because of the limited lifetime of batteries. Instead, wireless sensor nodes or related devices can be remotely powered. To increase the operating range and applicability of these remotely powered devices an electro-magnetic energy harvester is developed in a 0.13 μm low-cost CMOS technology. This paper presents an energy harvesting system that converts RF power to DC power to supply wireless sensor nodes, active transmitters or related systems with a power consumption up to the mW range. This energy harvesting system is used to power a wireless sensor node from the 900 MHz RF field. The wireless sensor node includes an on-chip temperature sensor and a bulk acoustic wave (BAW) based transmitter. The BAW resonator reduces the startup time of the transmitter to about 2 μs, which reduces the amount of energy needed in one transmission cycle. The maximum output power of the transmitter is 5.4 dBm. The chip contains an ultra-low-power control unit and consumes only 190 nW in idle mode. The required input power is -19.7 dBm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0 … score_13): 1.076981, 0.078, 0.073028, 0.071111, 0.071111, 0.071111, 0.071111, 0.035556, 0, 0, 0, 0, 0, 0
Mathematical Analysis of a Prime Modulus Quantizer MASH Digital Delta–Sigma Modulator A MASH digital delta-sigma modulator (DDSM) is analyzed mathematically. It incorporates first-order error feedback modulators (EFM) which include prime modulus quantizers to guarantee a minimum sequence length M. The purpose of this analysis is to calculate the exact sequence length of the aforementioned MASH DDSM. We show that the sequence length for an lth-order member of this modulator family M...
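A brute-force way to see the quantity under analysis (our simulation sketch; the moduli and input are illustrative): measure the period of the cascaded EFM1 residues for a constant input, comparing a prime modulus against a power of two.

```python
# MASH built from first-order error-feedback modulators (EFM1): each stage
# is an accumulator mod M whose residue drives the next stage.  For a
# constant input the period of the joint residue state is the sequence
# length the paper characterizes.
def mash_sequence_length(x: int, M: int, stages: int = 3) -> int:
    state = [0] * stages                 # EFM1 accumulator residues
    seen = {}
    for n in range(M ** stages + 1):     # pigeonhole: a repeat must occur
        key = tuple(state)
        if key in seen:
            return n - seen[key]         # period of the state orbit
        seen[key] = n
        acc = x
        for i in range(stages):          # residue of stage i feeds stage i+1
            state[i] = (state[i] + acc) % M
            acc = state[i]
    return -1                            # unreachable

for M in (31, 32):                       # prime vs power-of-two modulus
    print("M =", M, "-> sequence length", mash_sequence_length(x=1, M=M))
```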
Spur-free MASH delta-sigma modulation For multistage noise-shaping (MASH) delta-sigma modulation, this paper presents a new structure that is free of spurs for all input values. The proposed MASH structure cascades several first-order delta-sigma modulators (DSMs) like the traditional MASH structure but has an additional feedforward connection between two adjacent stages. The proposed MASH structure can increase the sequence length and thus reduce spurs. The reason why the proposed MASH structure has a long sequence length for the full input range is mathematically proved, and simulations are performed to verify the effect of the long sequence length. Simulation results show that the performance of the proposed MASH structure is close to that of the ideal DSM. In addition, the proposed MASH structure requires almost the same hardware cost as the traditional MASH structure.
Spurious tones in digital delta-sigma modulators resulting from pseudorandom dither Digital delta-sigma modulators (DDSMs) are finite state machines; their spectra are characterized by strong periodic tones (so-called spurs) when they cycle repeatedly in time through a small number of states. This happens when the input is constant or periodic. Pseudorandom dither generators are widely used to break up periodic cycles in DDSMs in order to eliminate spurs produced by underlying periodic behavior. Unfortunately, pseudorandom dither signals are themselves periodic and therefore can have limited effectiveness. This paper addresses the fundamental limitations of using pseudorandom dither signals that are inherently periodic. We clarify some common misunderstandings in the DDSM literature. We present rigorous mathematical analysis, case studies to illustrate the issues, and insights which can prove useful in design.
Spurious tones in digital delta sigma modulators with pseudorandom dither Pseudorandom dither generators are widely used to break up periodic cycles in digital delta sigma modulators in order to minimize spurious tones produced by underlying periodic behavior. Unfortunately, pseudorandom dither signals are themselves periodic and therefore can have limited effectiveness. This paper identifies some limitations of using pseudorandom dither signals that are inherently periodic.
Digital PLLs: the modern timing reference for radar and communication systems Digital PLLs are nowadays recognized as a viable approach for the design of high-performance frequency synthesizers in scaled CMOS technologies. Latest implementations allow achieving at low power both state-of-the-art rms jitter, between 50 fs and 100 fs, and highly linear fast frequency modulation capability, thus enabling both high-efficiency communications systems and radar applications in CMOS....
On the Mechanisms Governing Spurious Tone Injection in Fractional PLLs. In fractional phase-locked loops driven by ΣΔ modulators, there can be spurious tones in the power spectral density (PSD) of output signals even if the PSDs of the sequences used to drive the frequency divider are spur-free. This is due to undesirable nonlinear effects notably occurring in the charge pump (CP). In this brief, we focus on static and dynamic mismatch of the CP and its interaction with...
Spurious Tone Suppression Techniques Applied to a Wide-Bandwidth 2.4 GHz Fractional- N PLL This paper demonstrates that spurious tones in the output of a fractional-N PLL can be reduced by replacing the ΔΣ modulator with a new type of digital quantizer and adding a charge pump offset combined with a sampled loop filter. It describes the underlying mechanisms of the spurious tones, proposes techniques that mitigate the effects of the mechanisms, and presents a phase noise cancell...
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
The gem5 simulator The gem5 simulation infrastructure is the merger of the best aspects of the M5 [4] and GEMS [9] simulators. M5 provides a highly configurable simulation framework, multiple ISAs, and diverse CPU models. GEMS complements these features with a detailed and flexible memory system, including support for multiple cache coherence protocols and interconnect models. Currently, gem5 supports most commercial ISAs (ARM, ALPHA, MIPS, Power, SPARC, and x86), including booting Linux on three of them (ARM, ALPHA, and x86). The project is the result of the combined efforts of many academic and industrial institutions, including AMD, ARM, HP, MIPS, Princeton, MIT, and the Universities of Michigan, Texas, and Wisconsin. Over the past ten years, M5 and GEMS have been used in hundreds of publications and have been downloaded tens of thousands of times. The high level of collaboration on the gem5 project, combined with the previous success of the component parts and a liberal BSD-like license, make gem5 a valuable full-system simulation tool.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
An artificial neural network (p,d,q) model for timeseries forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, their performance on some real time series is not satisfactory. Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings indicate that integrating different models can be an effective way of improving their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed, using auto-regressive integrated moving average (ARIMA) models, in order to yield a more accurate forecasting model than artificial neural networks alone. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve the forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed.
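The hybrid scheme this abstract describes can be illustrated with a short sketch: fit an ARIMA model as the linear component, train a small neural network on lagged ARIMA residuals as the nonlinear component, and sum the two forecasts. This is a minimal illustration assuming statsmodels and scikit-learn are available; the toy series, the (2,1,1) order, and the network size are arbitrary choices, not the paper's.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(300)
y = 0.05 * t + np.sin(t / 6.0) + rng.normal(0, 0.2, t.size)  # toy series
train, test = y[:250], y[250:]

# Linear component: an ARIMA(p, d, q) model fit on the training series.
arima = ARIMA(train, order=(2, 1, 1)).fit()
linear_fc = arima.forecast(steps=test.size)

# Nonlinear component: a small MLP trained on lagged ARIMA residuals.
resid, lags = arima.resid, 4
X = np.column_stack([resid[i:resid.size - lags + i] for i in range(lags)])
z = resid[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X, z)

# Roll the residual model forward on its own predictions.
window, corr = list(resid[-lags:]), []
for _ in range(test.size):
    e_hat = mlp.predict(np.array(window[-lags:])[None, :])[0]
    corr.append(e_hat)
    window.append(e_hat)

hybrid_fc = linear_fc + np.array(corr)
print("ARIMA RMSE :", np.sqrt(np.mean((linear_fc - test) ** 2)).round(3))
print("Hybrid RMSE:", np.sqrt(np.mean((hybrid_fc - test) ** 2)).round(3))
```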
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer-based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-R_ESR design configuration. The prototype, fabricated in a TSMC 0.25-μm CMOS process, occupies 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide load current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
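A crude discrete-time simulation conveys the hysteretic current-mode idea: the switch turns on when the sensed inductor current falls below a band around the reference and off when it rises above it. The sketch below uses a fixed band and assumed component values (the paper's band is adaptive and its sensor is a circuit, not an equation).

```python
import numpy as np

# Assumed component values and a fixed hysteresis band (the paper's band
# is adaptive; this sketch keeps it constant for clarity).
Vin, L, C, Rload = 3.3, 4.7e-6, 10e-6, 5.0
i_ref, band, dt = 0.36, 0.05, 10e-9

iL, vout, sw = 0.0, 1.8, True
trace = []
for _ in range(200_000):
    # Hysteretic law: switch on below the band, off above it.
    if iL > i_ref + band:
        sw = False
    elif iL < i_ref - band:
        sw = True
    v_sw = Vin if sw else 0.0
    iL += (v_sw - vout) / L * dt          # inductor current slope
    vout += (iL - vout / Rload) / C * dt  # output capacitor charge balance
    trace.append(vout)

trace = np.array(trace[100_000:])         # discard the start-up transient
print("mean Vout: %.3f V, ripple: %.1f mV" %
      (trace.mean(), 1e3 * (trace.max() - trace.min())))
```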
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.046196
0.045074
0.043889
0.040389
0.033333
0.016667
0.004148
0
0
0
0
0
0
0
Visual-Servoing Based Global Path Planning Using Interval Type-2 Fuzzy Logic Control. Mobile robot motion planning in an unstructured, static, and dynamic environment is faced with a large amount of uncertainty. In an uncertain working area, a method should be selected to address the existing uncertainties in order to plan a collision-free path between two desired points. In this paper, we propose a mobile robot path planning method in the visual plane, using an overhead camera and an interval type-2 fuzzy inference system (IT2FIS). We deal with a visual-servoing based technique for obstacle-free path planning. It is necessary to determine the location of the mobile robot in the environment surrounding it. To reach the target efficiently while avoiding obstacles of different shapes in the environment, an IT2FIS is designed to generate a path. A simulation of the path planning technique compared with other methods is performed. We tested the algorithm within various scenarios. Experimental results show the efficiency of the generated path using an overhead camera for a mobile robot.
Stability and robust stability for systems with a time-varying delay To derive stability and robust stability criteria for systems with time-varying delays, this note uses not only the time-varying-delayed state x(t-h(t)) but also the delay-upper-bounded state x(t-h̄) to exploit all possible information about the relationship among the current state x(t), the exactly delayed state x(t-h(t)), the marginally delayed state x(t-h̄), and the derivative of the state ẋ(t), when constructing Lyapunov–Krasovskii functionals and some appropriate integral inequalities, originally suggested by Park (1999. A delay-dependent stability criterion for systems with uncertain time-invariant delays. IEEE Transactions on Automatic Control, 44(4), 876–877). Two fundamental criteria are provided: one for the case where no bound on the delay derivative is assumed, and one for the case where an upper bound on the delay derivative is assumed. Examples show that the resulting criteria outperform all existing ones in the literature.
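Criteria of this kind are expressed as linear matrix inequalities (LMIs) and checked with a semidefinite solver. As a much simpler illustration of the same workflow, the following sketch tests the classical delay-independent LMI for ẋ(t) = Ax(t) + A_d x(t−h(t)) with the basic Lyapunov–Krasovskii functional V = xᵀPx + ∫xᵀQx; it assumes cvxpy is available, the example matrices are arbitrary, and it is far weaker than the note's delay-dependent criteria.

```python
import numpy as np
import cvxpy as cp

# x'(t) = A x(t) + Ad x(t - h(t)): classical delay-independent test.
A = np.array([[-3.0, 0.5], [0.0, -2.0]])
Ad = np.array([[0.5, 0.0], [0.2, 0.4]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
# Feasibility of this block LMI certifies stability for ANY delay h(t).
M = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
             [Ad.T @ P, -Q]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), Q >> eps * np.eye(n),
                   M << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("LMI feasible (delay-independent stability):", prob.status == cp.OPTIMAL)
```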
Robust Stability of Impulsive Systems: A Functional-Based Approach An improved functional-based approach for the stability analysis of linear uncertain impulsive systems relying on Lyapunov looped-functionals is provided. Looped functionals are peculiar functionals that make it possible to encode discrete-time stability criteria into continuous-time conditions and to consider non-monotonic Lyapunov functions along the trajectories of the impulsive system. Unlike usual discrete-time stability conditions, the obtained conditions are convex in the system matrices, an important feature for extending the results to uncertain systems. The examples emphasize that the proposed approach can be applied to a class of systems for which existing approaches are inconclusive, notably systems having unstable continuous and discrete dynamics.
Tracking Control of Robot Manipulators with Unknown Models: A Jacobian-Matrix-Adaption Method. Tracking control of robot manipulators is a fundamental and significant problem in the robotics industry. As a conventional solution, the Jacobian-matrix-pseudo-inverse (JMPI) method suffers from two major limitations: one is the requirement of known information about the robot model, such as its parameters and structure; the other is the position-error accumulation phenomenon caused by its open-loop nature. To...
A new looped-functional for stability analysis of the linear impulsive system •A more advanced explicit looped functional, together with a new integral inequality, is proposed. This looped-functional approach allows Lyapunov functions that evolve non-monotonically along the trajectories of the system, broadening the admissible class of systems that may be analyzed. More adjustable parameters are introduced to reduce conservatism in the new integral inequality, which is affine in the integral interval. •Theorem 3.1 gives discrete-time stability results expressed in continuous time. A discrete-time stability condition is much weaker than a continuous-time one, since the continuous-time Lyapunov function is no longer required to decrease monotonically along the trajectories of the system. This feature is extremely important in the current framework in order to cope with expansive jumps and unstable continuous-time dynamics. Using such a discrete-time approach, only the decrease of the function evaluated at impulse instants matters. A novel framework for the stability analysis of impulsive systems is thus presented in this article, and it may be applied to the analysis of impulsive delayed systems or a wider class of systems. •The looped functional introduces the integral of the state as well as cross terms between this integral and the impulsive state, and it takes into account the information at both tk and tk+1 and on both intervals, from x(tk) to x(t) and from x(t) to x(tk+1), through weights t−tk and tk+1−t.
New Results for Sampled-Data Control of Interval Type-2 Fuzzy Nonlinear Systems This paper is devoted to the investigation of the interval type-2 (IT2) fuzzy sampled-data stabilization problem for the controlled plant subject to nonlinearities and parameter uncertainties. Some free-weighting matrices, slack matrices, and the bound information in membership functions are used to improve the stability analysis. Based on the Lyapunov–Krasovskii functional (LKF) theory, a new relaxed sufficient condition with fewer linear matrix inequality (LMI) constraints is derived. According to this criterion, the IT2 fuzzy sampled-data controller is devised to ensure the closed-loop system is asymptotically stable. Finally, three practical examples are provided to demonstrate the effectiveness and efficiency of the proposed design. Some comparisons show that the proposed algorithm is more simple and practical.
Control Design for Interval Type-2 Fuzzy Systems Under Imperfect Premise Matching This paper focuses on designing interval type-2 (IT2) control for nonlinear systems subject to parameter uncertainties. To facilitate the stability analysis and control synthesis, an IT2 Takagi-Sugeno (T-S) fuzzy model is employed to represent the dynamics of nonlinear systems of which the parameter uncertainties are captured by IT2 membership functions characterized by the lower and upper membership functions. A novel IT2 fuzzy controller is proposed to perform the control process, where the membership functions and number of rules can be freely chosen and different from those of the IT2 T-S fuzzy model. Consequently, the IT2 fuzzy-model-based (FMB) control system is with imperfectly matched membership functions, which hinders the stability analysis. To relax the stability analysis for this class of IT2 FMB control systems, the information of footprint of uncertainties and the lower and upper membership functions are taken into account for the stability analysis. Based on the Lyapunov stability theory, some stability conditions in terms of linear matrix inequalities are obtained to determine the system stability and achieve the control design. Finally, simulation and experimental examples are provided to demonstrate the effectiveness and the merit of the proposed approach.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System. Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images reside on a storage server while computations can be conducted on either the embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time to improve the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling; 2) how to place task images on storage servers, i.e., resource management; and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computational complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation-based studies.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
Software radio architecture: a mathematical perspective As the software radio makes its transition from research to practice, it becomes increasingly important to establish provable properties of the software radio architecture on which product developers and service providers can base technology insertion decisions. Establishing provable properties requires a mathematical perspective on the software radio architecture. This paper contributes to that perspective by critically reviewing the fundamental concept of the software radio, using mathematical models to characterize this rapidly emerging technology in the context of similar technologies like programmable digital radios. The software radio delivers dynamically defined services through programmable processing capacity that has the mathematical structure of the Turing machine. The bounded recursive functions, a subset of the total recursive functions, are shown to be the largest class of Turing-computable functions for which software radios exhibit provable stability in plug-and-play scenarios. Understanding the topological properties of the software radio architecture promotes plug-and-play applications and cost-effective reuse. Analysis of these topological properties yields a layered distributed virtual machine reference model and a set of architecture design principles for the software radio. These criteria may be useful in defining interfaces among hardware, middleware, and higher level software components that are needed for cost-effective software reuse
Investigation of the Energy Regeneration of Active Suspension System in Hybrid Electric Vehicles This paper investigates the idea of the energy regeneration of active suspension (AS) system in hybrid electric vehicles (HEVs). For this purpose, extensive simulation and control methods are utilized to develop a simultaneous simulation in which both HEV powertrain and AS systems are simulated in a unified medium. In addition, a hybrid energy storage system (ESS) comprising electrochemical batteries and ultracapacitors (UCs) is proposed for this application. Simulation results reveal that the regeneration of the AS energy results in an improved fuel economy. Moreover, by using the hybrid ESS, AS load fluctuations are transferred from the batteries to the UCs, which, in turn, will improve the efficiency of the batteries and increase their life.
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but they have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy efficiency is limited due to their excessive support for flexibility. Hardware accelerators may achieve better energy efficiency, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even setting learning accuracy aside, such an accelerator can still become inapplicable simply because the concrete ML task is altered or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques: k-means, k-nearest neighbors, naive Bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm², and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28 nm process), PuDianNao (65 nm process) is 1.20x faster, and can reduce the energy by 128.41x.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia record 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, with only 5.4% root-mean-square difference when tested on the MIT-BIH Arrhythmia Database.
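A toy version of slope-dependent nonuniform sampling is easy to write down: estimate the local slope, shrink the next sampling interval where the signal moves fast, and reconstruct by interpolation. The thresholds, gain, and test signal below are invented for illustration and are not the paper's design.

```python
import numpy as np

fs = 2000.0                                    # dense "analog" reference grid
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)  # toy signal

# Slope-dependent pulse generation (toy version): the next sampling
# interval shrinks when the instantaneous slope is steep.
T_min, T_max, k = 1e-3, 50e-3, 0.02
samples_t, samples_x = [0.0], [x[0]]
i = 0
while i < t.size - 1:
    slope = abs(x[i + 1] - x[i]) * fs          # local slope estimate
    T = np.clip(k / (slope + 1e-9), T_min, T_max)
    i = min(i + max(1, int(T * fs)), t.size - 1)
    samples_t.append(t[i]); samples_x.append(x[i])

x_rec = np.interp(t, samples_t, samples_x)     # reconstruction by interpolation
snr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - x_rec) ** 2))
print("avg rate: %.0f Sps, reconstruction SNR: %.1f dB"
      % (len(samples_t) / t[-1], snr))
```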
1.2496
0.2496
0.2496
0.2496
0.2496
0.1248
0.039808
0
0
0
0
0
0
0
Initializing sensor networks of non-uniform density in the weak sensor model Assumptions about node density in the Sensor Networks literature are frequently too strong or too weak. Neither absolutely arbitrary nor uniform deployment seems feasible in most of the intended applications of sensor nodes. We present a Weak Sensor Model-compatible distributed protocol for hop-optimal network initialization, under the assumption that the maximum density of nodes is some value Δ known by all of the nodes. In order to prove lower bounds, we observe that all nodes must communicate with some other node in order to join the network, and we call the problem of achieving such communication the Group Therapy Problem. We show lower bounds for the Group Therapy Problem in Radio Networks of maximum density Δ, regardless of the use of randomization, and a stronger lower bound for the important class of randomized fair protocols. We also show that even when nodes are distributed uniformly, the same lower bound holds, even in expectation, and even for the simpler problem of Clear Transmission.
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Analysis of Distributed Random Grouping for Aggregate Computation on Wireless Sensor Networks with Randomly Changing Graphs Dynamical connection graph changes are inherent in networks such as peer-to-peer networks, wireless ad hoc networks, and wireless sensor networks. Considering the influence of the frequent graph changes is thus essential for precisely assessing the performance of applications and algorithms on such networks. In this paper, using stochastic hybrid systems (SHSs), we model the dynamics and analyze the performance of an epidemic-like algorithm, distributed random grouping (DRG), for average aggregate computation on a wireless sensor network with dynamical graph changes. Particularly, we derive the convergence criteria and the upper bounds on the running time of the DRG algorithm for a set of graphs that are individually disconnected but jointly connected in time. An effective technique for the computation of a key parameter in the derived bounds is also developed. Numerical results and an application extended from our analytical results to control the graph sequences are presented to exemplify our analysis.
Brief announcement: locality-based aggregate computation in wireless sensor networks We present DRR-gossip, an energy-efficient and robust aggregate computation algorithm in sensor networks. We prove that the DRR-gossip algorithm requires O(n) messages and O(n^{3/2}/log^{1/2} n) one-hop wireless transmissions to obtain aggregates on a random geometric graph. This reduces the energy consumption by at least a factor of log n over the standard uniform gossip algorithm. Experiments validate the theoretical results and show that DRR-gossip needs far fewer transmissions than other gossip-based schemes.
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log(n/ε)) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
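The collision rule above (reception iff exactly one neighbor transmits) and a Decay-style randomized transmission schedule are straightforward to simulate. The sketch below, run on a random graph, is an assumed simplification of such protocols for illustration, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.05
adj = rng.random((n, n)) < p
adj = np.triu(adj, 1); adj = adj | adj.T       # undirected random graph

informed = np.zeros(n, bool); informed[0] = True
slots = 0
while not informed.all() and slots < 10_000:
    slots += 1
    # Decay-style rule: each informed node transmits with probability
    # 2^-(i+1), cycling i over ~log n phases (an assumed simple schedule).
    i = slots % int(np.log2(n))
    tx = informed & (rng.random(n) < 2.0 ** -(i + 1))
    # Collision rule: a node receives iff EXACTLY ONE neighbor transmits.
    hits = adj[:, tx].sum(axis=1)
    informed |= (hits == 1)
print("broadcast finished after", slots, "slots")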
The price of validity in dynamic networks Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) [6, 13, 21, 34, 35, 37] on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, single-site validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
Information Spreading in Stationary Markovian Evolving Graphs Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the stationary phase by analyzing the completion time of the flooding mechanism. We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs. Geometric Markovian evolving graphs where the Markovian behaviour is yielded by n mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane. Edge-Markovian evolving graphs where the probability of existence of any edge at time t depends on the existence (or not) of the same edge at time t-1. In both cases, the obtained upper bounds hold with high probability and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1, while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-μm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. Voltage-controlled oscillators (VCOs) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCOs, expressed in terms of jitter and phase noise, have a direct impact on timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators, together with a lack of experimental verification of underlying theories, has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCOs. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated [1], [2], our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise
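One of the mechanisms the paper formulates, white noise on the oscillation frequency integrating into a phase random walk, has a signature that is easy to reproduce numerically: the phase spectrum falls at roughly −20 dB/decade (1/f²). A minimal check, with an arbitrary noise level and rates (scipy assumed available):

```python
import numpy as np
from scipy.signal import welch

fs, nsamp = 1e6, 1 << 18
rng = np.random.default_rng(3)
dphi = 1e-3 * rng.standard_normal(nsamp)   # white frequency noise per sample
phi = np.cumsum(dphi)                      # phase = integral of frequency
f, pxx = welch(phi, fs=fs, nperseg=1 << 12)
# Fit the log-log slope over a mid-band region: expect about -2 (1/f^2).
m = (f > 1e3) & (f < 1e5)
slope = np.polyfit(np.log10(f[m]), np.log10(pxx[m]), 1)[0]
print("fitted phase-noise slope: %.2f (expected about -2)" % slope)
```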
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and treats them as an additional constraint on the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help make the best decision in order to contribute most effectively to sustainable development.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.093005
0.121383
0.083544
0.083544
0.071609
0.038527
0.002912
0.000263
0
0
0
0
0
0
A CMOS Burst-Mode Transmitter With Watt-Level RF PA and Flexible Fully Digital Front-End A fully digital burst-mode handheld transmitter with power amplifier for the 900-MHz band is presented. The transmitter front-end consists of a digital polar modulator which uses pulse width modulation (PWM) for the amplitude modulator. Phase modulation (PM) is implemented by shifting the carrier in time. Both the PWM and the PM are implemented using asynchronous delay lines which allow time resolutions down to 10 ps without the need for high-frequency clock signals. The modulated signal is amplified by a Class B amplifier which uses power combining to reach watt-level output power. The transmitter is implemented in standard CMOS technology. When transmitting a modulated signal with a peak-to-average power ratio (PAPR) of 10.3 dB and 5-MHz bandwidth, the burst-mode transmitter meets the stringent error-vector-magnitude (EVM) specifications of 5.6% at 23.1-dBm average output power with 11.7% power added efficiency (PAE).
Frequency-Domain Analysis of Digital PWM-Based RF Modulators for Flexible Wireless Transmitters This paper presents a frequency-domain analysis of the noise and distortion terms produced by a digital RF modulator that uses pulse width modulation (PWM) for the amplitude modulation and a square wave as RF carrier. Insight in these terms is important as they limit the error vector magnitude (EVM) the modulator can achieve. For each of the terms, frequency-domain expressions are derived which are valid as long as the quantization noise is small and the digital PWM is sufficiently close to natural-sampling PWM. The dependency of the terms on the different system parameters is estimated, and the calculations are supported and complemented with simulation results. The presented analysis improves the understanding of the dominant noise and distortion sources, which can significantly speed up the design of PWM-based transmitters.
A 2.4-GHz 20-40-MHz Channel WLAN Digital Outphasing Transmitter Utilizing a Delay-Based Wideband Phase Modulator in 32-nm CMOS. A digital outphasing transmitter is presented for 2.4-GHz WiFi. The transmitter consists of two delay-based phase modulators and a 26-dBm integrated switching class-D power amplifier. The delay-based phase modulator delays incoming LO edges with a resolution of 1.4 ps (8 bit) required to meet WiFi requirements. A phase MUX architecture is proposed to implement switching between phases once every L...
A fully digital multimode polar transmitter employing 17b RF DAC in 3G mode.
Power Amplifier Selection for LINC Applications. Linear amplification with nonlinear components (LINC) using a nonisolating combiner has the potential for high efficiency and good linearity. In past work, the interaction between two power amplifiers has been interpreted as a time-varying load presented at the output of amplifiers, and the linearity and efficiency of the LINC system has been evaluated according to how the power amplifiers respond...
A Transmitter Architecture Based on Delta–Sigma Modulation and Switch-Mode Power Amplification This brief presents a method of deploying RF switch-mode power amplification for varying envelope signals. Thereby the power amplifier can be operated as a switch, with high power efficiency as the result. The key idea is to transmit either a full RF period or none at all, in such a way that the correct modulated RF signal is obtained after filtering. This is accomplished in a novel configuration of a low-pass ΔΣ modulator using a phase-modulated clock combined with a simple AND gate. The designed modulator is easy to implement, displays very good linearity, and offers time-domain signals that promote the power efficiency of the power amplifier. The working principle is described through theory and simulations, and validation is done via measurements on a prototype of the modulator. Measurements on the prototype show that the modulator modulates a UMTS signal with more than 10-dB margin to the spectrum mask and EVM below 0.85% RMS (required: <17.5%). Index terms: delta-sigma, power amplifier (PA), RF, switch mode, transmitter architecture, varying envelope.
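The "full RF period or none at all" idea rests on a low-pass ΔΣ modulator turning the envelope into a 1-bit stream whose local average tracks the envelope; gating the phase-modulated carrier with that stream (the AND gate) then yields the burst waveform. A minimal first-order sketch of the envelope path, with arbitrary rates and test signal, not the brief's circuit:

```python
import numpy as np

fs, f_env = 1.0e6, 3e3                        # modulator clock, envelope tone
t = np.arange(20000) / fs
env = 0.5 + 0.4 * np.sin(2 * np.pi * f_env * t)   # varying envelope in [0, 1]

# First-order low-pass delta-sigma modulator: integrate the error between
# the envelope and the 1-bit output, then quantize the integrator state.
integ, bits = 0.0, np.empty(t.size)
for k in range(t.size):
    integ += env[k] - (bits[k - 1] if k else 0.0)
    bits[k] = 1.0 if integ >= 0.5 else 0.0

# Each '1' would transmit one full RF period (the AND-gate idea); a moving
# average stands in here for the post-PA band filter.
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
err = recovered[500:-500] - env[500:-500]
print("rms envelope error after filtering: %.3f" % np.sqrt(np.mean(err ** 2)))
```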
A 5.8 GHz 1 V Linear Power Amplifier Using a Novel On-Chip Transformer Power Combiner in Standard 90 nm CMOS A fully integrated 5.8 GHz Class AB linear power amplifier (PA) in a standard 90 nm CMOS process using thin oxide transistors utilizes a novel on-chip transformer power combining network. The transformer combines the power of four push-pull stages with low insertion loss over the bandwidth of interest and is compatible with standard CMOS process without any additional analog or RF enhancements. Wi...
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
Peer counting and sampling in overlay networks: random walk methods In this article we address the problem of counting the number of peers in a peer-to-peer system, and more generally of aggregating statistics of individual peers over the whole system. This functionality is useful in many applications, but hard to achieve when each node has only a limited, local knowledge of the whole system. We propose two generic techniques to solve this problem. The Random Tour method is based on the return time of a continuous time random walk to the node originating the query. The Sample and Collide method is based on counting the number of random samples gathered until a target number of redundant samples are obtained. It is inspired by the "birthday paradox" technique of [6], upon which it improves by achieving a target variance with fewer samples. The latter method relies on a sampling sub-routine which returns randomly chosen peers. Such a sampling algorithm is of independent interest. It can be used, for instance, for neighbour selection by new nodes joining the system. We use a continuous time random walk to obtain such samples. We analyse the complexity and accuracy of the two methods. We illustrate in particular how expansion properties of the overlay affect their performance.
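The Sample and Collide estimator can be sketched in a few lines if one idealizes the sampling subroutine as uniform draws (the paper obtains such samples via a continuous-time random walk). Stopping after a target number of redundant samples and inverting the birthday-paradox expectation gives a size estimate; the stopping count below is an arbitrary choice.

```python
import numpy as np

def sample_and_collide(peer_ids, target_collisions=10, rng=None):
    """Estimate population size from uniform samples, stopping after a
    target number of redundant samples (birthday-paradox estimator)."""
    rng = rng or np.random.default_rng()
    seen, collisions, draws = set(), 0, 0
    while collisions < target_collisions:
        peer = rng.choice(peer_ids)
        draws += 1
        if peer in seen:
            collisions += 1
        else:
            seen.add(peer)
    # Among T draws, the expected number of collisions is about
    # T(T-1)/(2n), so invert that for a method-of-moments estimate.
    return draws * (draws - 1) / (2 * target_collisions)

rng = np.random.default_rng(7)
n_true = 5000
peers = np.arange(n_true)
estimates = [sample_and_collide(peers, 10, rng) for _ in range(20)]
print("true n = %d, mean estimate = %.0f" % (n_true, np.mean(estimates)))
```

Raising the collision target trades more samples for lower estimator variance, which is the improvement over the single-collision birthday technique the abstract mentions.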
A bridging model for parallel computation, communication, and I/O
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i. e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.2232
0.2232
0.0613
0.025437
0.007766
0.0014
0.000272
0
0
0
0
0
0
0
A spectrally modulated, spectrally encoded analytic framework for carrier interferometry signals This paper applies a recently introduced general analytic framework for spectrally modulated and spectrally encoded (SMSE) signals to carrier interferometry (CI) signals. The SMSE framework mathematically incorporates the waveform adaptivity and diversity found in SMSE signals. Future fourth generation (4G) radios are likely to operate using cognitive principles whereby the system adapts to changing traffic loads, interfering signals, spectrum availability, and channel conditions. Because 4G architectures are contemplating the use of SMSE techniques to enable cognitive communications, a general analytic framework was recently introduced in which SMSE signals can be derived, analyzed, and implemented. This paper adopts this concise mathematical model and applies it to CI signals, including those that couple CI coding techniques with orthogonal frequency division multiplexing (OFDM), coded OFDM, or multi-carrier code division multiple access (MC-CDMA). As shown herein, the model may be implementable using adaptive software defined radio (SDR) techniques.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
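Dominance frontiers admit a very compact computation once immediate dominators are known. The sketch below uses the later Cooper–Harvey–Kennedy formulation rather than this paper's original algorithm: for each join node b, walk each predecessor up the dominator tree until idom(b), adding b to the frontier of every node visited.

```python
# Dominance frontiers via the Cooper-Harvey-Kennedy formulation.
def dominance_frontiers(preds, idom):
    df = {n: set() for n in idom}
    for b, ps in preds.items():
        if len(ps) < 2:
            continue                      # only join nodes create frontiers
        for p in ps:
            runner = p
            while runner != idom[b]:
                df[runner].add(b)
                runner = idom[runner]     # climb the dominator tree
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'merge'}, 'b': {'merge'}, 'merge': set()}
```

The frontier of a node is exactly where its definitions stop dominating, which is why phi-functions for SSA form are placed along (iterated) dominance frontiers.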
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
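The lookup Chord describes can be sketched directly: identifiers live on an m-bit ring, each node keeps finger i = successor(n + 2^i), and a lookup repeatedly jumps to the closest preceding finger, roughly halving the remaining distance each hop. This is a simplified, centralized sketch (one process holds all tables) with assumed parameters, not the distributed protocol.

```python
import hashlib

M = 16                                     # identifier bits (small for demo)
RING = 1 << M

def h(key):
    return int(hashlib.sha1(str(key).encode()).hexdigest(), 16) % RING

def in_interval(x, a, b):                  # x in (a, b] on the ring
    return (a < x <= b) if a < b else (x > a or x <= b)

nodes = sorted(h(f"node{i}") for i in range(20))

def successor(ident):
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]                        # wrap around the ring

# Finger table of node n: finger[i] = successor(n + 2^i mod 2^M).
fingers = {n: [successor((n + (1 << i)) % RING) for i in range(M)]
           for n in nodes}

def lookup(start, key_id, hops=0):
    if in_interval(key_id, start, successor((start + 1) % RING)):
        return successor((start + 1) % RING), hops
    # Jump along the finger that advances furthest without overshooting.
    for f in reversed(fingers[start]):
        if f != start and in_interval(f, start, key_id):
            return lookup(f, key_id, hops + 1)
    return successor(key_id), hops + 1

node, hops = lookup(nodes[0], h("some-data-key"))
print("key stored at node %d, found in %d hops (log2 n ~ %d)"
      % (node, hops, len(nodes).bit_length()))
```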
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
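As a concrete instance of the splitting the review describes, here is a minimal ADMM sketch for the lasso, one of the applications listed in the abstract. The penalty rho, the regularization weight lam, the iteration count, and the test data are illustrative choices, not values from the paper.

```python
import numpy as np

# A minimal ADMM sketch for the lasso:
#   minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
# The splitting x = z yields the classic three-step iteration below.
# rho, lam, and the test data are illustrative, not from the paper.

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)  # prox of the l1 term
        u = u + x - z                         # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b), 2))  # sparse estimate close to x_true
```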
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
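The level structure described above can be shown compactly: each node draws a random membership vector, and at level i every group of nodes sharing the same length-i prefix forms one sorted list, with level 0 a single ordered list of all nodes. The sketch below only builds that structure to convey the idea; the paper's search, insert, and repair algorithms are not reproduced, and the key set and level count are arbitrary.

```python
import random
from collections import defaultdict

# Sketch of the skip-graph level structure only: each node draws a random
# membership vector, and at level i nodes sharing the same length-i prefix
# form one sorted list. Level 0 is a single ordered list of all nodes,
# which is what enables queries by key ordering. The paper's
# search/insert/repair algorithms are not reproduced here.

def build_levels(keys, max_level=3, seed=1):
    random.seed(seed)
    vec = {k: tuple(random.randint(0, 1) for _ in range(max_level)) for k in keys}
    levels = []
    for i in range(max_level + 1):
        groups = defaultdict(list)
        for k in sorted(keys):
            groups[vec[k][:i]].append(k)  # one sorted list per length-i prefix
        levels.append(dict(groups))
    return levels

for i, lists in enumerate(build_levels([13, 21, 33, 48, 75, 99])):
    print("level", i, lists)
```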
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Magnetic Field Measurement Based on the Sagnac Interferometer With a Ferrofluid-Filled High-Birefringence Photonic Crystal Fiber. A compact optical fiber magnetic field sensor based on the principle of the Sagnac interferometer is proposed. Different from the conventional ones, a ferrofluid-filled high-birefringence photonic crystal fiber (HB-PCF) is inserted into the Sagnac as a magnetic field sensing element. The refractive index of the ferrofluid filled in the cladding air holes of the HB-PCF will change with respect to t...
Ultrasensitive Magnetic Field Sensing Based on Refractive-Index-Matched Coupling. An ultrasensitive magnetic field sensor is proposed and investigated experimentally. The no-core fiber is fusion-spliced between two pieces of single-mode fibers and then immersed in magnetic fluid with an appropriate value of refractive index. Under the refractive-index-matched coupling condition, the guided mode becomes leaky and a coupling wavelength dip in the transmission spectrum of the structure is observed. The coupling wavelength dip is extremely sensitive to the ambient environment. The excellent sensitivity to the refractive index is measured to be 116.681 m/RIU (refractive index unit) in the refractive index range of 1.45691-1.45926. For the as-fabricated sensors, the highest magnetic field sensing sensitivities of 6.33 and 1.83 nm/mT are achieved at low and high fields, respectively. The sensitivity is considerably enhanced compared with those of previously designed, similar structures.
Integrated Fluxgate Magnetometer for Use in Isolated Current Sensing. This paper presents two integrated magnetic sensor ICs for isolated current measurement that have a fluxgate magnetometer co-integrated along with circuitry on a die. The integrated fluxgate has a sensitivity of 250 V/T and a 500 ksps readout circuit and requires only 5.4 mW for fluxgate excitation, which is 20x more power-efficient than the state of the art. The fluxgate magnetometer was used to ...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
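The completeness/accuracy split above can be made concrete with the usual heartbeat approximation: suspect a process whose heartbeat is overdue, and enlarge its timeout whenever a suspicion proves mistaken. The sketch below is an illustration of how such detectors are commonly realized in practice, not a construction from the paper; all names and the timeout policy are assumptions.

```python
import time

# Illustrative heartbeat sketch of an unreliable failure detector:
# suspect a process whose heartbeat is overdue (completeness), and
# enlarge its timeout whenever a suspicion turns out to be a mistake
# (working toward accuracy). Mirrors the abstract's completeness/accuracy
# split; not a construction from the paper.

class HeartbeatDetector:
    def __init__(self, procs, timeout=1.0):
        now = time.monotonic()
        self.last = {p: now for p in procs}
        self.timeout = {p: timeout for p in procs}
        self.suspected = set()

    def on_heartbeat(self, p):
        self.last[p] = time.monotonic()
        if p in self.suspected:        # we wrongly suspected a live process:
            self.suspected.discard(p)  # retract the mistake and
            self.timeout[p] *= 2       # be more patient with p next time

    def poll(self):
        now = time.monotonic()
        for p, t in self.last.items():
            if now - t > self.timeout[p]:
                self.suspected.add(p)  # crashed processes end up (and stay) here
        return set(self.suspected)
```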
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
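The diffusion of node identities can be sketched as synchronous floodmax/floodmin rounds, which is the core of the Max-Min heuristic. The sketch below is a heavy simplification: it deliberately omits the paper's clusterhead-selection rules and simply has each node elect whatever id it is left holding; the graph and d are arbitrary.

```python
# Heavily simplified round-based sketch of the identity diffusion behind
# the Max-Min heuristic: d rounds propagating the largest id (floodmax),
# then d rounds propagating the smallest surviving id (floodmin). The
# paper's actual clusterhead-selection rules are omitted; here each node
# simply adopts whatever id it is left holding.

def flood(graph, values, rounds, pick):
    for _ in range(rounds):
        values = {v: pick([values[v]] + [values[u] for u in graph[v]])
                  for v in graph}
    return values

def max_min_sketch(graph, d):
    ids = {v: v for v in graph}
    wmax = flood(graph, ids, d, max)   # floodmax: dominating ids spread out
    return flood(graph, wmax, d, min)  # floodmin: shrink back toward sources

# Six nodes in a line, d = 2: nodes 1-3 gravitate to clusterhead 3.
g = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
print(max_min_sketch(g, 2))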
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
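A toy difference-of-means experiment conveys the idea: guess a key byte, predict one intermediate bit per known input, split the traces on that prediction, and keep the guess with the largest mean difference. The substitution table, leakage model, and noise level below are made-up stand-ins for illustration (not AES and not real measurements).

```python
import numpy as np

# Toy difference-of-means experiment in the spirit of DPA. The
# substitution table, leakage model, and noise are made-up stand-ins
# (not AES, not real measurements).

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)  # stand-in nonlinear table

def leakage(inputs, key):
    """Simulated traces: every sample leaks the table-output LSB plus noise."""
    bit = SBOX[inputs ^ key] & 1
    return bit[:, None] + 0.5 * rng.standard_normal((len(inputs), 20))

def dpa(traces, inputs):
    """Best key guess = largest peak in the difference-of-means trace."""
    peaks = []
    for g in range(256):
        sel = SBOX[inputs ^ g] & 1             # predicted intermediate bit
        diff = traces[sel == 1].mean(0) - traces[sel == 0].mean(0)
        peaks.append(np.abs(diff).max())
    return int(np.argmax(peaks))

inputs = rng.integers(0, 256, 2000)
print(hex(dpa(leakage(inputs, key=0x3C), inputs)))  # recovers 0x3c
```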
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Quadratic programming with one negative eigenvalue is NP-hard We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hacijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
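For concreteness, the hard class can be written as a quadratic program whose Hessian has exactly one negative eigenvalue. The form below is a generic illustrative instance of that class, not the reduction used in the paper:

```latex
% One illustrative member of the NP-hard class: a quadratic objective
% whose Hessian has exactly one negative eigenvalue (here -2, from the
% -x_1^2 term), minimized over a polyhedron.
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & -x_1^2 + \sum_{i=2}^{n} x_i^2 + c^{\top} x \\
\text{subject to} \quad & A x \le b
\end{aligned}
```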
Backwards-compatible array bounds checking for C with very low overhead The problem of enforcing correct usage of array and pointer references in C and C++ programs remains unsolved. The approach proposed by Jones and Kelly (extended by Ruwase and Lam) is the only one we know of that does not require significant manual changes to programs, but it has extremely high overheads of 5x-6x and 11x-12x in the two versions. In this paper, we describe a collection of techniques that dramatically reduce the overhead of this approach, by exploiting a fine-grain partitioning of memory called Automatic Pool Allocation. Together, these techniques bring the average overhead checks down to only 12% for a set of benchmarks (but 69% for one case). We show that the memory partitioning is key to bringing down this overhead. We also show that our technique successfully detects all buffer overrun violations in a test suite modeling reported violations in some important real-world programs.
Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix -- novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors.
Design of ultra-wide-load, high-efficient DC-DC buck converters The paper presents the design of a current-mode control DC-DC buck converter with pulse-width modulation (PWM) mode. The converter achieves over 90% efficiency across a load range of 50 mA to 500 mA, with a maximum power efficiency of 95.6%; the circuit was simulated in the TSMC 0.35 μm CMOS process. To achieve high efficiency over an ultra-wide load range, this design uses two PMOS transistors as switches. Results show that the converter achieves above 90% efficiency over the range from 30 mA to 1200 mA, with a maximum efficiency of 96.36%. Results also show that, with the additional switch transistor, the current load range is more than doubled. With two PMOS transistors, the proposed converter can also cover three different load ranges, so it can be programmed for applications operating in any of those three ranges.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.04
0
0
0
0
0
0
0
0
0
0
0
A fully differential ultra-compact broadband transformer based quadrature generation scheme This paper presents an ultra-compact transformer-based quadrature generation scheme, which converts a differential input signal to fully differential quadrature outputs with low passive loss, broad bandwidth, and robustness against process variations. A new layout strategy is proposed to implement this 6-port transformer-based network within only one inductor-footprint for significant area saving. A 5 GHz quadrature generation design is implemented in a standard 65 nm CMOS process with a core area of only 260 μm by 260 μm, achieving size reduction of over 1,600 times compared to a 5GHz λ/4 branch-line coupler. This implementation achieves 0.82 dB signal loss at 5 GHz and maximum 3.8° phase error and ±0.5dB amplitude mismatch within a bandwidth of 13% (4.75 GHz to 5.41 GHz). Measurement results over 9 independent samples show a standard phase deviation of 1.9° verifying the robustness of the design.
Quantization Noise Suppression in Digitally Segmented Amplifiers In this paper, we consider the problem of out-of-band quantization noise suppression in the general family of direct digital-to-RF (DDRF) conversion circuits, where the RF carrier is amplitude modulated by a quantized representation of the baseband signal. Hence, it is desired to minimize the out-of-band quantization noise in order to meet stringent requirements such as receive-band noise levels in frequency-division duplex transceivers. In this paper, we address the problem of out-of-band quantization noise by introducing a novel signal-processing solution, which we refer to as "segmented filtering (SF)." We assess the capability of the proposed SF solution by means of performance analysis and results that have been obtained via circuit-level computer simulations as well as laboratory measurements. Our proposed approach has demonstrated the ability to preserve the required signal quality and power amplifier (PA) efficiency while providing more than 35-dB attenuation of the quantization noise, thus eliminating the need for substantial post-PA passband RF filtering.
A 1.9 GHz CMOS Power Amplifier With Embedded Linearizer to Compensate AM-PM Distortion. A series combining transformer(SCT)-based, watt-level 1.9 GHz linear CMOS power amplifier with an on-chip linearizer is demonstrated. Proposed compact, predistortion-based linearizer is embedded in the two-stage PA to compensate AM-PM distortion of the cascode power stages, and improve ACLR of 3GPP WCDMA uplink signal by 2.6 dB at 28.0 dBm output power. The designed interstage power distributor wi...
CMOS Doherty Amplifier With Variable Balun Transformer and Adaptive Bias Control for Wireless LAN Application This paper presents a novel CMOS Doherty power amplifier (PA) with an impedance inverter using a variable balun transformer (VBT) and adaptive bias control of an auxiliary amplifier. Unlike a conventional quarter-wavelength (λ/4) transmission line impedance inverter of a Doherty PA, the proposed VBT impedance inverter can achieve load modulation without any phase delay circuit. As a result, a λ/4 phase compensation circuit at the input path of the auxiliary amplifier can be removed, and the total size of the Doherty PA can be reduced. Additionally, an enhancement of the power efficiency at backed-off power levels can successfully be achieved with an adaptive gate bias in a common gate stage of the auxiliary amplifier. The PA, fabricated with 0.13-μm CMOS technology, achieved a 1-dB compression point (P1 dB) of 31.9 dBm and a power-added efficiency (PAE) at P1 dB of 51%. When the PA is tested with 802.11g WLAN orthogonal frequency division multiplexing (OFDM) signal of 54 Mb/s, a 25-dB error vector magnitude (EVM) compliant output power of 22.8 dBm and a PAE of 30.1% are obtained, respectively.
A 90-nm CMOS Doherty power amplifier with minimum AM-PM distortion A linear Doherty amplifier is presented. The design reduces AM-PM distortion by optimizing the device-size ratio of the carrier and peak amplifiers to cancel each other's phase variation. Consequently, this design achieves both good linearity and high backed-off efficiency associated with the Doherty technique, making it suitable for systems with large peak-to-average power ratio (WLAN, WiMAX, etc.). The fully integrated design has on-chip quadrature hybrid coupler, impedance transformer, and output matching networks. The experimental 90-nm CMOS prototype operating at 3.65 GHz achieves 12.5% power-added efficiency (PAE) at 6 dB back-off, while exceeding IEEE 802.11a -25 dB error vector magnitude (EVM) linearity requirement (using 1.55-V supply). A 28.9 dBm maximum Psat is achieved with 39% PAE (using 1.85-V supply). The active die area is 1.2 mm².
2.8 A broadband CMOS digital power amplifier with hybrid Class-G Doherty efficiency enhancement Spectrum-efficient modulations in modern wireless systems often result in large peak-to-average power ratios (PAPRs) for the transmitted signals. Therefore, PA efficiency at deep power back-off (PBO) levels (e.g., -12dB) becomes critical to extend the mobile's battery life. Classic techniques, i.e., outphasing, envelope tracking, and Doherty PAs, offer marginal efficiency improvement at deep PBO in practice. Dual-mode PAs require switches at the PA output for high-/low-power mode selection [1,2], posing reliability and linearity challenges. Although simple supply switching (Class-G) is effective at deep PBO, it only offers Class-B-like PBO efficiency in each supply mode [3,4]. Multi-level outphasing PA requires multiple supplies and frequent supply switching [5], resulting in substantial DC-DC converter overhead and exacerbated switching noise.
A Transformer-Combined 31.5 dBm Outphasing Power Amplifier in 45 nm LP CMOS With Dynamic Power Control for Back-Off Power Efficiency Enhancement. A transformer-combined fully integrated outphasing class-D PA in 45 nm LP CMOS achieves 31.5 dBm peak output power at 2.4 GHz with 27% peak PAE, and supports over 86 dB of output power range. The PA employs dynamic power control (DPC) whereby sections of the PA are turned on or off dynamically according to the instantaneous signal amplitude to reduce power dissipation, especially at back-off. Dyna...
A filtering technique to lower LC oscillator phase noise Based on a physical understanding of phase-noise mechanisms, a passive LC filter is found to lower the phase-noise factor in a differential oscillator to its fundamental minimum. Three fully integrated LC voltage-controlled oscillators (VCOs) serve as a proof of concept. Two 1.1-GHz VCOs achieve -153 dBc/Hz at 3 MHz offset, biased at 3.7 mA from 2.5 V. A 2.1-GHz VCO achieves -148 dBc/Hz at 15 MHz offset, taking 4 mA from a 2.7-V supply. All oscillators use fully integrated resonators, and the first two exceed discrete transistor modules in figure of merit. Practical aspects and repercussions of the technique are discussed
Measurement issues in galvanic intrabody communication: influence of experimental setup Significance: The need for increasingly energy-efficient and miniaturized bio-devices for ubiquitous health monitoring has paved the way for considerable advances in the investigation of techniques such as intrabody communication (IBC), which uses human tissues as a transmission medium. However, IBC still poses technical challenges regarding the measurement of the actual gain through the human body. The heterogeneity of experimental setups and conditions used, together with the inherent uncertainty caused by the human body, makes the measurement process even more difficult. Goal: The objective of this work, focused on galvanic coupling IBC, is to study the influence of different measurement equipment and conditions on the IBC channel. Methods: Different experimental setups have been proposed in order to analyze key issues such as grounding, load resistance, type of measurement device, and the effect of cables. In order to avoid the uncertainty caused by the human body, an IBC electric circuit phantom mimicking both human bioimpedance and gain has been designed. Given the low-frequency operation of galvanic coupling, a frequency range between 10 kHz and 1 MHz has been selected. Results: The correspondence between simulated and experimental results obtained with the electric phantom has allowed us to discriminate the effects caused by the measurement equipment. Conclusion: This study has helped us obtain useful considerations about optimal setups for galvanic-type IBC, as well as to identify some of the main causes of discrepancy in the literature.
Next-generation wireless communications concepts and technologies Next-generation wireless (NextG) involves the concept that the next generation of wireless communications will be a major move toward ubiquitous wireless communications systems and seamless high-quality wireless services. This article presents the concepts and technologies involved, including possible innovations in architectures, spectrum allocation and utilization, in radio communications, networks, and services and applications. These include dynamic and adaptive systems and technologies that provide a new paradigm for spectrum assignment and management, smart resource management, dynamic and fast adaptive multilayer approaches, smart radio, and adaptive networking. Technologies involving adaptive and highly efficient modulation, coding, multiple access, media access, network organization, and networking that can provide ultraconnectivity at high data rates with effective QoS for NextG are also described.
Nonlinear adaptive control of active suspensions In this paper, a previously developed nonlinear "sliding" control law is applied to an electro-hydraulic suspension system. The controller relies on an accurate model of the suspension system. To reduce the error in the model, a standard parameter adaptation scheme, based on Lyapunov analysis, is introduced. A modified adaptation scheme, which enables the identification of parameters whose values change with regions of the state space, is then presented. These parameters are not restricted to being slowly time-varying as in the standard adaptation scheme; however, they are restricted to being constant or slowly time varying within regions of the state space. The adaptation algorithms are coupled with the control algorithm and the resulting system performance is analyzed experimentally. The performance is determined by the ability of the actuator output to track a specified force. The performance of the active system, with and without the adaptation, is analyzed. Simulation and experimental results show that the active system is better than a passive system in terms of improving the ride quality of the vehicle. Furthermore, both of the adaptive schemes improve performance, with the modified scheme giving the greater improvement in performance.
Design and Analysis of a Class-D Stage With Harmonic Suppression. This paper presents the design and analysis of a low-power Class-D stage in 90 nm CMOS featuring a harmonic suppression technique, which cancels the 3rd harmonic by shaping the output voltage waveform. Only digital circuits are used and the short-circuit current present in Class-D inverter-based output stages is eliminated, relaxing the buffer requirements. Using buffers with reduced drive strengt...
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
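The direct-insertion idea is simple to sketch: to raise the rate by (N + α)/N for integer α, duplicate α samples within every block of N. In the sketch below, the even spread of insertion points is a naive placeholder for the selection that the paper actually optimizes for minimum distortion; the function name and test values are illustrative.

```python
# Minimal sketch of direct insertion for fractional SRC: to raise the rate
# by (N + α)/N for integer α, duplicate α samples inside every block of N
# input samples. The even spread below is a naive placeholder for the
# insertion-point selection that the paper optimizes for minimum distortion.

def insert_src(x, N, alpha):
    points = {round((k + 1) * N / (alpha + 1)) - 1 for k in range(alpha)}
    y = []
    for i, s in enumerate(x):
        y.append(s)
        if i % N in points:
            y.append(s)  # direct insertion: repeat the sample
    return y

x = list(range(12))
print(insert_src(x, N=4, alpha=1))  # 12 samples in -> 15 out (rate x 5/4)
```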
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.104096
0.105567
0.105567
0.070842
0.043088
0.027708
0.010064
0.000033
0
0
0
0
0
0
A Trimodal Wireless Implantable Neural Interface System-on-Chip A wireless and battery-less trimodal neural interface system-on-chip (SoC), capable of 16-ch neural recording, 8-ch electrical stimulation, and 16-ch optical stimulation, all integrated on a 5 × 3 mm² chip fabricated in a 0.35-μm standard CMOS process. The trimodal SoC is designed to be inductively powered and communicated. The downlink data telemetry utilizes on-off keying pulse-position modulation (OOK-PPM) of the power carrier to deliver configuration and control commands at 50 kbps. The analog front-end (AFE) provides adjustable mid-band gain of 55-70 dB, low/high cut-off frequencies of 1-100 Hz/10 kHz, and input-referred noise of 3.46 μVrms within the 1 Hz-50 kHz band. AFE outputs of every two channels are digitized by a 50 kS/s 10-bit SAR-ADC, and multiplexed together to form a 6.78 Mbps data stream to be sent out by OOK modulating a 434 MHz RF carrier through a power amplifier (PA) and 6 cm monopole antenna, which form the uplink data telemetry. Optical stimulation has a switched-capacitor based stimulation (SCS) architecture, which can sequentially charge four storage capacitor banks up to 4 V and discharge them in selected μLEDs at instantaneous current levels of up to 24.8 mA on demand. Electrical stimulation is supported by four independently driven stimulating sites at 5-bit controllable current levels in the ±(25-775) μA range, while active/passive charge balancing circuits ensure safety. In vivo testing was conducted on four anesthetized rats to verify the functionality of the trimodal SoC.
Compact, Energy-Efficient High-Frequency Switched Capacitor Neural Stimulator With Active Charge Balancing. Safety and energy efficiency are two major concerns for implantable neural stimulators. This paper presents a novel high-frequency, switched capacitor (HFSC) stimulation and active charge balancing scheme, which achieves high energy efficiency and well-controlled stimulation charge in the presence of large electrode impedance variations. Furthermore, the HFSC can be implemented in a compact size w...
A Digitally Dynamic Power Supply Technique for 16-Channel 12 V-Tolerant Stimulator Realized in a 0.18- μm 1.8-V/3.3-V Low-Voltage CMOS Process. A new digitally dynamic power supply technique for 16-channel 12-V-tolerant stimulator is proposed and realized in a 0.18-μm 1.8-V/3.3-V CMOS process. The proposed stimulator uses four stacked transistors as the pull-down switch and pull-up switch to withstand 4 times the nominal supply voltage (4 × VDD). With the dc input voltage of 3.3 V, the regulated three-stage charge pump, which is capable ...
A 200 μW Eight-Channel EEG Acquisition ASIC for Ambulatory EEG Systems The growing interest toward the improvement of patients' quality of life and the use of medical signals in nonmedical applications such as entertainment, sports, and brain-computer interfaces, requires the implementation of miniaturized and wireless biopotential acquisition systems with ultralow power dissipation. Therefore, this paper presents the implementation of a complete EEG acquisition ASIC ...
Reliable Next-Generation Cortical Interfaces for Chronic Brain-Machine Interfaces and Neuroscience. This review focuses on recent directions stemming from work by the authors and collaborators in the emerging field of neurotechnology. Neurotechnology has the potential to provide a greater understanding of the structure and function of the complex neural circuits in the brain, as well as impacting the field of brain-machine interfaces (BMI). We envision ultralow-power wireless neural interface sy...
Wireless Multichannel Neural Recording With a 128-Mbps UWB Transmitter for an Implantable Brain-Machine Interfaces. Simultaneous recordings of neural activity at large scale, in the long term and under bio-safety conditions, can provide essential data. These data can be used to advance the technology for brain-machine interfaces in clinical applications, and to understand brain function. For this purpose, we present a new multichannel neural recording system that can record up to 4096-channel (ch) electrocortic...
An Inductively Powered Wireless Neural Recording and Stimulation System for Freely-Behaving Animals. An inductively-powered wireless integrated neural recording and stimulation (WINeRS-8) system-on-a-chip (SoC) that is compatible with the EnerCage-HC2 for wireless/battery-less operation has been presented for neuroscience experiments on freely behaving animals. WINeRS-8 includes a 32-ch recording analog front end, a 4-ch current-controlled stimulator, and a 434 MHz on-off keying data link to an e...
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Cost Efficient Resource Management in Fog Computing Supported Medical Cyber-Physical System. With the recent development in information and communication technology, more and more smart devices penetrate into people's daily life to promote the life quality. As a growing healthcare trend, medical cyber-physical systems (MCPSs) enable seamless and intelligent interaction between the computational elements and the medical devices. To support MCPSs, cloud resources are usually explored to pro...
Subspace pursuit for compressive sensing signal reconstruction We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.
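The two-step refinement in the abstract (expand the candidate support with the columns best matching the residual, then prune back to K via a least-squares fit) can be sketched directly. The stopping rule, iteration limit, and test problem below are illustrative choices, not parameters from the paper.

```python
import numpy as np

# Compact sketch of the subspace pursuit iteration: keep a support of size
# K, expand it with the K columns best correlated with the residual, then
# prune back to K via a least-squares fit; stop when the residual stops
# shrinking. Limits and test data are illustrative.

def subspace_pursuit(Phi, y, K, max_iter=20):
    def ls_fit(S):
        coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        return coef, y - Phi[:, S] @ coef
    T = np.argsort(-np.abs(Phi.T @ y))[:K]             # initial support
    coef, r = ls_fit(T)
    for _ in range(max_iter):
        cand = np.union1d(T, np.argsort(-np.abs(Phi.T @ r))[:K])
        coef_c, _ = ls_fit(cand)
        T_new = cand[np.argsort(-np.abs(coef_c))[:K]]  # prune to K largest
        coef_new, r_new = ls_fit(T_new)
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break                                      # residual stopped shrinking
        T, coef, r = T_new, coef_new, r_new
    x = np.zeros(Phi.shape[1])
    x[T] = coef
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 37, 80]] = [1.0, -0.5, 2.0]
print(np.nonzero(subspace_pursuit(Phi, Phi @ x0, K=3))[0])  # -> [ 5 37 80]
```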
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm² and consumes 178.4 mW at 5 Gb/s.
Design Aspects of an Active Electromagnetic Suspension System for Automotive Applications. This paper is concerned with the design aspects of an active electromagnetic suspension system for automotive applications which combines a brushless tubular permanent magnet actuator (TPMA) with a passive spring. This system provides for additional stability and safety by performing active roll and pitch control during cornering and braking. Furthermore, elimination of the road irregularities is possible, hence passenger drive comfort is increased. Based upon measurements, static and dynamic specifications of the actuator are derived. The electromagnetic suspension is installed on a quarter-car test setup, and the improved performance using roll control is measured and compared to a commercial passive system. An alternative design using a slotless external-magnet tubular actuator is proposed which fulfills the derived performance, thermal, and volume specifications.
Formal Analysis of Leader Election in MANETs Using Real-Time Maude.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
1.0525
0.06
0.06
0.05
0.05
0.025
0.0125
0
0
0
0
0
0
0
On the Design of Wideband Transformer-Based Fourth Order Matching Networks for E-Band Receivers in 28-nm CMOS. This paper discusses the design of on-chip transformer-based fourth order filters, suitable for mm-Wave highly sensitive broadband low-noise amplifiers (LNAs) and receivers (RXs) implemented in deep-scaled CMOS. Second order effects due to layout parasitics are analyzed and new design techniques are introduced to further enhance the gain-bandwidth product of this class of filters. The design and m...
A 4-Bit, 1.6 GS/s Low Power Flash ADC, Based on Offset Calibration and Segmentation A low power 4-bit, 1.6 GS/s flash ADC is presented. A new power reduction technique which masks the unused blocks in a semi-pipeline chain of latches and encoders is introduced. The proposed circuit determines the unused blocks based on a pre-sensing of the signal. Moreover, a reference voltage generator with very low static power dissipation is used. Novel techniques to reduce the sensitivity to dynamic noise are proposed to suppress the noise effects on the reference generator. The proposed circuit reduces the power consumption by 20 percent compared to the conventional structure when a Nyquist-rate OFDM signal is applied. The INL and DNL of the converter are smaller than 0.3 LSB after calibration. The converter achieves 3.8 effective number of bits (ENOB) at a 1.6 GS/s sampling rate with a low-frequency input signal, and more than 1.8 GHz effective resolution bandwidth (ERBW) at this sampling rate. The converter consumes a mere 15.5 mW from a 1.8 V supply, yielding an FoM of 695 fJ/conversion-step, and occupies 0.3 mm² in a 0.18 μm standard CMOS process.
Integration of Array Antennas in Chip Package for 60-GHz Radios. This paper discusses the integration of array antennas in chip packages for highly integrated 60-GHz radios. First, we evaluate fixed-beam array antennas, showing that most of them suffer from feed network complexity and require sophisticated process techniques to achieve enhanced performance. We describe the grid array antenna and show that is a good choice for fixed-beam array antenna applicatio...
IEEE 802.11ad: directional 60 GHz communication for multi-Gigabit-per-second Wi-Fi [Invited Paper] With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave band. However, multi-gigabit-per-second throughput and small interference footprint come at the price of adverse signal propagation characteristics, and require a fundamental rethinking of Wi-Fi communication principles. This article describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular, we study the transition from omnidirectional to highly directional communication and its impact on the design of IEEE 802.11ad.
A 17 mW 3-to-5 GHz Duty-Cycled Vital Sign Detection Radar Transceiver With Frequency Hopping and Time-Domain Oversampling. This paper presents a low power interference-robust radar transceiver architecture for noncontact vital sign detection and mobile healthcare applications. A duty-cycled transceiver design is proposed to significantly reduce power consumption of front-end circuits. Occupying 3-to-5 GHz band with four 500 MHz sub-channels, the radar mitigates the narrowband interference (NBI) problem with the freque...
A Low Power 6-bit Flash ADC With Reference Voltage and Common-Mode Calibration In this paper, a low power 6-bit ADC that uses reference voltage and common-mode calibration is presented. A method for adjusting the differential and common-mode reference voltages used by the ADC to improve its linearity is described. Power dissipation is reduced by using small device sizes in the ADC and relying on calibration to cancel the large non-ideal offsets due to device mismatches. The ADC occupies 0.13 mm2 in 65 nm CMOS and dissipates 12 mW at a sample rate of 800 MS/s from a 1.2 V supply.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool, while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was the AT91M40400. The results clearly establish scratchpad memory as a low-power alternative in most situations, with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% relative to the cache memory.
Approximate counting, uniform generation and rapidly mixing Markov chains The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n^{-β}) are available either for all β ∈ ℝ or for no β ∈ ℝ. A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.
A theory of nonsubtractive dither A detailed mathematical investigation of multibit quantizing systems using nonsubtractive dither is presented. It is shown that by the use of dither having a suitably chosen probability density function, moments of the total error can be made independent of the system input signal but that statistical independence of the error and the input signals is not achievable. Similarly, it is demonstrated that values of the total error signal cannot generally be rendered statistically independent of one another but that their joint moments can be controlled and that, in particular, the error sequence can be rendered spectrally white. The properties of some practical dither signals are explored, and recommendations are made for dithering in audio, video, and measurement applications. The paper collects all of the important results on the subject of nonsubtractive dithering and introduces important new ones with the goal of alleviating persistent and widespread misunderstandings regarding the technique
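The paper's central claims are easy to check numerically: with triangular-pdf dither of 2 LSB peak-to-peak added before quantization and not subtracted afterwards, the total-error mean becomes input-independent and the error variance settles at LSB²/4. The sketch below is a small numerical illustration of exactly that; the input levels and sample count are arbitrary.

```python
import numpy as np

# Numerical check of the abstract's claims: with triangular-pdf (TPDF)
# dither of 2 LSB peak-to-peak added before quantization and NOT
# subtracted afterwards, the total-error mean is input-independent and
# the error variance settles at LSB^2/4 (vs. input-dependent error
# without dither).

rng = np.random.default_rng(0)
LSB = 1.0

def quantize(x):
    return np.round(x / LSB) * LSB

for dc in [0.0, 0.25, 0.5]:  # constant inputs, including one mid-tread edge
    x = np.full(200_000, dc)
    tpdf = (rng.uniform(-LSB / 2, LSB / 2, x.size)
            + rng.uniform(-LSB / 2, LSB / 2, x.size))
    err_plain = quantize(x) - x        # undithered: error tracks the input
    err_dith = quantize(x + tpdf) - x  # nonsubtractive: dither stays in
    print(f"input {dc:4.2f}: plain mean {err_plain.mean():+.3f}, "
          f"dithered mean {err_dith.mean():+.3f}, var {err_dith.var():.3f}")
```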
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality.Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies.Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies.Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data).Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem.Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact-match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, and peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a proof of concept in a dangerous goods monitoring scenario and, finally, we discuss test results for structural properties and query performance evaluation.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterruptible and switching-mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is in the mT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
A new positive linear functional filters design for positive linear systems This paper is concerned with a new time-domain design of positive functional filters for linear time-invariant continuous-time positive multivariable systems affected by bounded disturbances. Roughly speaking, a positive system is a dynamic system whose output remains in the non-negative orthant whenever the initial state and the input are non-negative. The order of the proposed filter is equal to the dimension of the vector to be estimated. This new approach is based on the unbiasedness of the filter using a Sylvester equation; the problem is then solved via Linear Matrix Inequalities (LMI) to find the optimal gain implemented in the positive filter design. All filter matrices are designed such that the dynamics of the estimation error is positive and asymptotically stable. A numerical example is given to illustrate our approach.
Network-based static output feedback tracking control for fuzzy-model-based nonlinear systems This paper is concerned with network-based static output feedback tracking control for a class of nonlinear systems that cannot be stabilized by a static output feedback controller without a time-delay, but can be stabilized by a delayed static output feedback controller. For such systems, network-induced delay is intentionally introduced in the feedback loop to produce a stable and satisfactory tracking control. The nonlinear network-based control system is represented by an asynchronous T-S fuzzy system with an interval time-varying sawtooth delay due to sample-and-hold behaviors and network-induced delays. A new discontinuous complete Lyapunov-Krasovskii functional, which makes use of the lower bound of network-induced delays, the sawtooth delay and its upper bound, is constructed to derive a delay-dependent criterion on H∞ tracking performance analysis. Since routine relaxation methods in traditional T-S fuzzy systems cannot be employed to reduce the conservatism of the stability criterion, a new relaxation method is proposed by using asynchronous constraints on fuzzy membership functions to introduce some free-weighting matrices. Based on the feasibility of the derived criterion, a particle swarm optimization algorithm is presented to search for the minimum H∞ tracking performance and static output feedback gains. An illustrative example is provided to show the effectiveness of the proposed method.
A fundamental control performance limit for a class of positive nonlinear systems. A fundamental performance limit is derived for a class of positive nonlinear systems. The performance limit describes the achievable output response in the presence of a positive disturbance and subject to a sign constraint on the allowable input. An explicit optimal input is derived which minimises the maximum output response whilst ensuring that the minimum output response does not fall below a pre-specified lower bound. The result provides a fundamental performance standard against which all control policies, including closed loop schemes, can be compared. Implications of the result are examined in the context of blood glucose regulation for Type 1 Diabetes.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
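For the arbitrary-switching special case mentioned above, the existence of a common linear copositive Lyapunov function V(x) = vᵀx reduces to a linear feasibility problem: find v > 0 with Aᵢᵀv < 0 for every (Metzler, continuous-time) mode Aᵢ. A sketch using SciPy's linprog; the two modes and all names are made-up stable examples, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def common_copositive_lyapunov(A_list, eps=1e-6):
    """Feasibility LP for a common linear copositive Lyapunov function
    V(x) = v^T x of a switched positive system under arbitrary switching:
    find v >= 1 (elementwise, which scales to v > 0) with A_i^T v <= -eps."""
    n = A_list[0].shape[0]
    A_ub = np.vstack([A.T for A in A_list])   # stack all A_i^T v <= -eps rows
    b_ub = -eps * np.ones(A_ub.shape[0])
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(1.0, None)] * n, method="highs")
    return res.x if res.success else None

A1 = np.array([[-2.0, 1.0], [0.5, -3.0]])     # illustrative Metzler, Hurwitz modes
A2 = np.array([[-1.0, 0.2], [1.0, -2.0]])
print(common_copositive_lyapunov([A1, A2]))   # a feasible v, e.g. ~[2, 1]
```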
Stability Analysis and Estimation of Domain of Attraction for Positive Polynomial Fuzzy Systems With Input Saturation In this paper, the stability and positivity of positive polynomial fuzzy-model-based (PPFMB) control systems are investigated, in which the positive polynomial fuzzy model and positive polynomial fuzzy controller are allowed to have premise membership functions different from each other. These mismatched premise membership functions can increase the flexibility of controller design; however, they lead to conservative results when stability is analyzed based on Lyapunov stability theory. To relax the positivity/stability conditions, the improved Taylor-series-membership-functions-dependent (ITSMFD) method is introduced, which brings the sample-point information of the Taylor-series approximate membership functions, local error information, and boundary information of the substate space of the premise variables into the stability/positivity conditions. Meanwhile, the ITSMFD method is extended to PPFMB control systems with input saturation to relax the estimation of the domain of attraction. Finally, simulation examples are presented to verify the feasibility of this method.
A hidden Markov model based control for periodic systems subject to singular perturbations This study analyzes the problem of hidden Markov model based control for periodic systems subject to singular perturbations and Lur'e cone-bounded nonlinearity. Unlike the existing time-invariant fading channels, the fading channels here are relaxed to be time-varying. Furthermore, to better depict the time-varying property of the fading channels, a novel periodic Markov process framework subject to mean and variance is developed. The highlight of this study is that a hidden Markov model detector is put forward to observe the fading channel mode, whose detection probabilities are generalized to be only partially known. New techniques are developed for dealing with the stochastic Lyapunov functional, and sufficient conditions are obtained to ensure that the resulting dynamics are stochastically stable. In the sequel, the asynchronous controller parameters are further derived to reflect the discrepancy between the fading channel mode and its detected counterpart. Finally, an application-oriented example is given to confirm the effectiveness and applicability of the developed control strategy.
Polynomial Fuzzy-Model-Based Control Systems: Stability Analysis via Approximated Membership Functions Considering Sector Nonlinearity of Control Input This paper presents the stability analysis of polynomial fuzzy-model-based (PFMB) control systems in which both the polynomial fuzzy model and the polynomial fuzzy controller are allowed to have their own set of premise membership functions. In order to address the input nonlinearity, the control signal is considered to be bounded by a sector with nonlinear bounds. These nonlinear lower and upper bounds of the sector are constructed by combining local bounds using fuzzy blending such that local information of the input nonlinearity can be taken into account. With the consideration of imperfectly matched membership functions and input nonlinearity, the applicability of the PFMB control scheme can be further enhanced. To facilitate the stability analysis, a general form of approximated membership functions representing the original ones is introduced. As a result, approximated membership functions can be brought into the stability analysis, leading to relaxed stability conditions. A sum-of-squares (SOS) approach is employed to obtain the stability conditions based on Lyapunov stability theory. Simulation examples are presented to demonstrate the feasibility of the proposed method.
Robust Stability of Impulsive Systems: A Functional-Based Approach An improved functional-based approach for the stability analysis of linear uncertain impulsive systems relying on Lyapunov looped functionals is provided. Looped functionals are peculiar functionals that make it possible to encode discrete-time stability criteria into continuous-time conditions and to consider non-monotonic Lyapunov functions along the trajectories of the impulsive system. Unlike usual discrete-time stability conditions, the obtained ones are convex in the system matrices, an important feature for extending the results to uncertain systems. It is emphasized in the examples that the proposed approach can be applied to a class of systems for which existing approaches are inconclusive, notably systems having unstable continuous and discrete dynamics.
Kron Reduction of Graphs with Applications to Electrical Networks Consider a weighted undirected graph and its corresponding Laplacian matrix, possibly augmented with additional diagonal elements corresponding to self-loops. The Kron reduction of this graph is again a graph whose Laplacian matrix is obtained by the Schur complement of the original Laplacian matrix with respect to a specified subset of nodes. The Kron reduction process is ubiquitous in classic ci...
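Since the Kron reduction described above is just a Schur complement of the Laplacian with respect to the eliminated nodes, it is a few lines of NumPy. The function name and the three-node path-graph example below are mine, for illustration: eliminating the middle node of a unit-weight path combines the two series conductances into 1/2, as expected.

```python
import numpy as np

def kron_reduction(L, keep):
    """Kron reduction of a (possibly self-loop-augmented) Laplacian L:
    the Schur complement onto the retained node set `keep`."""
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(L.shape[0]), keep)
    L_kk = L[np.ix_(keep, keep)]
    L_ke = L[np.ix_(keep, elim)]
    L_ek = L[np.ix_(elim, keep)]
    L_ee = L[np.ix_(elim, elim)]
    return L_kk - L_ke @ np.linalg.solve(L_ee, L_ek)

# Path graph 1-2-3 with unit weights; eliminating the middle node yields
# an edge of weight 1/2 between the endpoints (series conductances combine).
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
print(kron_reduction(L, keep=[0, 2]))   # [[0.5, -0.5], [-0.5, 0.5]]
```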
Solving the find-path problem by good representation of free space Free space is represented as a union of (possibly overlapping) generalized cones. An algorithm is presented which efficiently finds good collision-free paths for convex polygonal bodies through space littered with obstacle polygons. The paths are good in the sense that the distance of closest approach to an obstacle over the path is usually far from minimal over the class of topologically equivalent collision-free paths. The algorithm is based on characterizing the volume swept by a body as it is translated and rotated as a generalized cone, and determining under what conditions one generalized cone is a subset of another.
Mobility Management Strategies in Heterogeneous Cognitive Radio Networks Considering the capacity gain of the secondary system and the capacity loss of the primary system caused by the newly accessing user, a distributed binary power allocation (admittance criterion) is proposed in dense cognitive networks including plentiful ...
Charge redistribution loss consideration in optimal charge pump design The charge redistribution loss of capacitors is reviewed, and then employed in the optimal capacitor assignment of charge pumps. The average output voltage is unambiguously defined, and efficiency due to redistribution loss is discussed. Analyses are confirmed by Hspice simulations on charge pumps designed using a 0.35 μm CMOS process.
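The redistribution loss at the heart of the paper is a two-line computation: charge is conserved when two capacitors are connected, energy is not, and the deficit C1·C2·(V1−V2)²/(2(C1+C2)) is independent of the switch resistance. A worked numeric check (function name and values are illustrative):

```python
# Connecting capacitor C1 at V1 to C2 at V2 conserves charge, not energy:
# the final voltage is the charge-weighted average, and the lost energy is
# E_loss = C1*C2*(V1 - V2)^2 / (2*(C1 + C2)), independent of switch resistance.
def redistribution_loss(c1, v1, c2, v2):
    v_final = (c1 * v1 + c2 * v2) / (c1 + c2)
    e_before = 0.5 * (c1 * v1**2 + c2 * v2**2)
    e_after = 0.5 * (c1 + c2) * v_final**2
    return v_final, e_before - e_after

v_final, e_loss = redistribution_loss(c1=1e-12, v1=1.0, c2=1e-12, v2=0.0)
print(v_final, e_loss)   # 0.5 V and 0.25 pJ, matching the closed form
```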
A Hybrid Threshold Self-Compensation Rectifier For RF Energy Harvesting This paper presents a novel highly efficient 5-stage RF rectifier in a SMIC 65 nm standard CMOS process. To improve power conversion efficiency (PCE) and reduce the minimum input voltage, a hybrid threshold self-compensation approach is applied in the proposed RF rectifier, combining gate-bias threshold compensation with body-effect compensation. The proposed circuit uses PMOSFETs in all stages except the first to allow individual body bias, which eliminates the need for triple-well technology. The presented RF rectifier exhibits a simulated maximum PCE of 30% at -16.7 dBm (20.25 μW) and produces 1.74 V across a 0.5 MΩ load resistance. With a 1 MΩ load resistance, it outputs 1.5 V DC from a remarkably low input power of -20.4 dBm (9 μW) with a PCE of about 25%.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
1.071571
0.071258
0.071258
0.066667
0.066667
0.066667
0.030399
0.008364
0
0
0
0
0
0
A 65-nm CMOS 6-bit 2.5-GS/s 7.5-mW 8× Time-Domain Interpolating Flash ADC With Sequential Slope-Matching Offset Calibration. A 6-bit 2.5-GS/s 8× dynamic interpolating flash analog-to-digital converter (ADC) with an offset calibration technique for interpolated voltage-to-time converters (VTCs) is presented for high-speed applications. The dynamic-amplifier-structured VTC enables linear zero-crossing (ZX) interpolation in the time domain with an interpolation factor of 8, which reduces the number of front-end VTCs to one-...
A 12-b 10-GS/s Interleaved Pipeline ADC in 28-nm CMOS Technology. A 12-bit 10-GS/s interleaved (IL) pipeline analog-to-digital converter (ADC) is described in this paper. The ADC achieves a signal to noise and distortion ratio (SNDR) of 55 dB and a spurious free dynamic range (SFDR) of 66 dB with a 4-GHz input signal, is fabricated in the 28-nm CMOS technology, and dissipates 2.9 W. Eight pipeline sub-ADCs are interleaved to achieve 10-GS/s sample rate, and mism...
A 13-mW 64-dB SNDR 280-MS/s Pipelined ADC Using Linearized Integrating Amplifiers. A 12-bit pipelined analog-to-digital converter (ADC) using a new integration-based open-loop residue amplifier topology is presented. The amplifier distortion is cancelled with the help of an analog linearization technique based on a tunable input-driven active degeneration. Amplifier gain and nonlinearity errors are detected in background using split-ADC calibration technique. The mismatch betwee...
22.3 A 20GHz-BW 6b 10GS/s 32mW time-interleaved SAR ADC with Master T&H in 28nm UTBB FDSOI technology To sustain ever-growing data traffic, modern wireline communication devices (over copper or fiber optic media) require a high-speed ADC in their receive path to do the digital equalization, or to recover the complex-modulated information. A 6b 10GS/s ADC able to acquire up to 20GHz input signal frequency and showing 5.3 ENOB in Nyquist condition is presented. It is based on a Master Track & Hold (T&H) followed by a time-interleaved synchronous SAR ADC, thus avoiding the need for any kind of skew or bandwidth calibration. Ultra Thin Body and BOX Fully Depleted SOI (UTBB FDSOI) 28nm CMOS technology is used for its fast switching and regenerating capability. The core ADC consumes 32mW from 1V power supply and occupies 0.009mm2 area. The FoM is 81fJ/conversion step.
A 2.6 mW 6 bit 2.2 GS/s Fully Dynamic Pipeline ADC in 40 nm Digital CMOS A 2.2 GS/s 4×-interleaved 6b ADC in 40 nm digital CMOS is presented. Each ADC slice consists of a 1b folding stage followed by a pipelined binary-search sub-ADC using dynamic nonlinear amplifiers for low power consumption and high speed. The folding stage samples the input, removes its common-mode component and rectifies the differential voltage. The pipelined binary-search sub-ADC leverages threshold calibration to correct for amplifier and comparator imperfections, which allows the use of inherently nonlinear dynamic amplifiers. The prototype achieves 31.6 dB SNDR at 2.2 GS/s with a 2 GHz ERBW for 2.6 mW power consumption in an area of 0.03 mm2.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
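The constant error carousel described above is easiest to see in code. A minimal NumPy sketch of a single step of the now-standard LSTM cell (note: the forget gate is a later extension to the 1997 design, included here because it is the form in common use; the weights and dimensions are random placeholders):

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One step of a standard LSTM cell with input (i), forget (f), and
    output (o) gates. W maps [x; h] to the four gate pre-activations;
    c is the cell state that carries error over long time lags."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c_new = f * c + i * np.tanh(g)      # gated constant error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 3, 5
W = rng.normal(scale=0.1, size=(4 * nh, nx + nh))
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
for t in range(10):                     # run over a short random sequence
    h, c = lstm_step(rng.normal(size=nx), h, c, W, b)
print(h)
```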
Towards a Common API for Structured Peer-to-Peer Overlays In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them. In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties. We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community.
Towards a higher-order synchronous data-flow language The paper introduces a higher-order synchronous data-flow language in which communication channels may themselves transport programs. This provides a means to dynamically reconfigure data-flow processes. The language comes as a natural and strict extension of both lustre and lucy. This extension is conservative, in the sense that a first-order restriction of the language can receive the same semantics. We illustrate the expressivity of the language with some examples, before giving the formal semantics of the underlying calculus. The language is equipped with a polymorphic type system allowing types to be automatically inferred and a clock calculus rejecting programs for which synchronous execution cannot be statically guaranteed. To our knowledge, this is the first higher-order synchronous data-flow language where stream functions are first class citizens.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
0
Analysis and Design of N-Path Band-Pass Filters With Negative Base Band Resistance This paper reviews the possibility of adding an active circuit that implements a small-signal negative resistor to the baseband portion of the N-path filter circuit, in order to compensate for losses caused by harmonic products and other parasitic effects. By adding an active circuit inside the baseband part, insertion loss can be eliminated reciprocally. Interestingly, in the case of the two-port configuration, in addition to the improvement in insertion loss, a lower noise figure can be theoretically achieved as well. The introduction of the negative resistance is analyzed using linear periodically time-variant theory and a linear time-invariant approximation. The theoretical analysis is verified in simulation and measurement. The circuit implementation consists of a two-port N-path filter, implemented in a 65-nm CMOS process, with a PMOS cross-coupled pair serving as the negative differential resistor. We achieve ~3.5 dB insertion loss improvement at the expense of 0.64 mW of added power, over the frequency range of 0.75-2 GHz.
A 1.2-V Self-Reconfigurable Recursive Mixer With Improved IF Linearity in 130-nm CMOS. A 1.2-V self-reconfigurable recursive mixer structure with improved intermediate frequency (IF) linearity and signal isolation is proposed. For a traditional recursive mixer that reuses the gm stage to amplify both the input radio frequency (RF) and downconverted IF signal, signal isolation and linearity are limited by the signal-reusing structure. In this brief, the self-reconfigurable gm stage i...
Simplified Unified Analysis of Switched-RC Passive Mixers, Samplers, and N-Path Filters Using the Adjoint Network. Recent innovations in software defined CMOS radio transceiver architectures heavily rely on high-linearity switched-RC sampler and passive-mixer circuits, driven by digitally programmable multiphase clocks. Although seemingly simple, the frequency domain analysis of these linear periodically time variant (LPTV) circuits is often deceptively complex. This paper uses the properties of sampled LPTV s...
40-nm CMOS Wideband High-IF Receiver Using a Modified Charge-Sharing Bandpass Filter to Boost Q-Factor. A 40-nm CMOS wideband high-IF receiver is presented in this paper. The low-noise transconductance amplifier (LNTA) uses dual noise cancellation in order to improve its noise figure. The LNTA has also a folded-cascode structure to increase its output impedance and prepare for a current-mode passive mixer. This structure is merged into the output stage of the LNTA, so there is no need for extra tran...
Analysis and Design of a 20-MHz Bandwidth, 50.5-dBm OOB-IIP3, and 5.4-mW TIA for SAW-Less Receivers. A power-efficient transimpedance amplifier with wide channel bandwidth is proposed to meet the stringent linearity requirements of surface acoustic wave-less frequency-division duplexing receivers. A unity-gain loop bandwidth of 1.6 GHz is achieved with low-power dissipation. This was done without using any internal compensation but relying on zeros, both within the operational transconductance am...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
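The precondition-independence heuristic underlying hsp can be stated compactly: estimate each proposition's cost by a fixed point in which an action costs 1 plus the sum of its precondition estimates. A sketch under that reading, usually called the additive heuristic h_add (the toy two-action domain and the function name are mine):

```python
def h_add(state, goal, actions):
    """Additive heuristic: proposition costs under the assumption that
    action preconditions are achieved independently, computed by fixed
    point. `actions` is a list of (preconditions, add-effects) set pairs;
    an action costs 1 plus the sum of its precondition costs."""
    INF = float("inf")
    props = set(state) | set(goal) | {q for pre, add in actions for q in pre | add}
    cost = {p: (0 if p in state else INF) for p in props}
    changed = True
    while changed:                      # relax until no estimate improves
        changed = False
        for pre, add in actions:
            c = 1 + sum(cost[p] for p in pre)
            for q in add:
                if c < cost[q]:
                    cost[q] = c
                    changed = True
    return sum(cost[g] for g in goal)

# Toy STRIPS-like domain: achieve 'b' from 'a', then 'c' from 'b'.
actions = [({"a"}, {"b"}), ({"b"}, {"c"})]
print(h_add(state={"a"}, goal={"c"}, actions=actions))   # 2
```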
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware “neurons” that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
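Operationally, the PNN is a per-class Parzen-window density estimate with one Gaussian kernel per training pattern; the class with the largest kernel-sum wins. A compact sketch (sigma, the synthetic data, and the function name are illustrative choices, not from the paper):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: sum one Gaussian (Parzen) kernel per
    training pattern within each class; the decision approaches the Bayes
    rule as the training set grows and sigma shrinks appropriately."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)          # squared distances to x
        scores.append(np.exp(-d2 / (2 * sigma**2)).mean())
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(np.array([2.5, 2.5]), X, y))   # expected: class 1
```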
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
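For the linear-quadratic special case (the paper itself treats nonlinear autonomous systems), a receding-horizon move is a finite-horizon Riccati sweep followed by applying only the first input and re-solving at the next state. A sketch with an illustrative double-integrator plant; all matrices and the horizon length are my placeholders:

```python
import numpy as np

def rhc_step(x, A, B, Q, R, N):
    """One receding-horizon move: minimize the horizon-N quadratic cost
    x'Qx + u'Ru by a backward finite-horizon Riccati recursion, then
    return only the first input of the optimal sequence."""
    P = Q.copy()
    for _ in range(N):                   # backward Riccati sweep
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x                        # first input of the horizon

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
x = np.array([1.0, 0.0])
for _ in range(50):                       # closed loop: re-plan every step
    x = A @ x + (B @ rhc_step(x, A, B, Q, R, N=20)).ravel()
print(x)                                  # state driven toward the origin
```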
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterruptible and switching-mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is in the mT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
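The level-crossing idea compared above is easy to demonstrate: emit a sample only when the signal moves by one quantization step from the last emitted level, so slowly varying stretches produce few events. A simplified sketch (one level step per time sample; the function name and test waveform are illustrative):

```python
import numpy as np

def level_crossing_sample(t, x, delta):
    """Event-driven level-crossing sampler: emit (time, level) whenever the
    signal departs by >= delta from the last emitted level, embedding
    compression in the conversion itself."""
    samples = [(t[0], x[0])]
    last = x[0]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - last) >= delta:
            last = last + delta * np.sign(xi - last)
            samples.append((ti, last))
    return samples

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)   # decaying tone as a stand-in biosignal
events = level_crossing_sample(t, x, delta=0.05)
print(f"{len(events)} events vs {len(t)} uniform samples")
```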
1.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
0
Analysis and Design of a Thermoelectric Energy Harvesting System With Reconfigurable Array of Thermoelectric Generators for IoT Applications. In this paper, a novel thermoelectric energy harvesting system with a reconfigurable array of thermoelectric generators (TEGs), which requires neither an inductor nor a flying capacitor, is proposed. The proposed architecture can accomplish maximum power point tracking (MPPT) and voltage conversion simultaneously via the reconfiguration of the TEG array, and demonstrate significantly improved powe...
A 0.36-V 5-MS/s Time-Mode Flash ADC With Dickson-Charge-Pump-Based Comparators in 28-nm CMOS A Dickson charge pump (CP) is proposed here to realize a voltage-to-time converter (VTC) within an array of time-domain comparators of a 54-level time-mode subthreshold flash ADC operating at 0.36 V. Two identical CPs in each of the 54 ADC slices convert the input and reference voltages into variable-slope ramp signals fed into comparators for 'flash' quantization. Considering the fact that the compa...
Fully-Integrated Reconfigurable Charge Pump With Two-Dimensional Frequency Modulation for Self-Powered Internet-of-Things Applications In this paper, we propose a fully-integrated reconfigurable charge pump in a 0.18-μm CMOS process; this converter is applicable for self-powered Internet-of-Things applications. The proposed charge pump uses a two-dimensional frequency modulation technique, which combines both the pulse-frequency modulation (PFM) and pulse-skip modulation (PSM) techniques. The PFM technique adjusts the operating frequency of the converter according to the variations in the load current, and the PSM technique regulates the output voltage. The proposed two-dimensional frequency modulation technique can improve the overall power conversion efficiency and the response time of the converter under light load conditions. A photovoltaic cell was chosen as the input source of the proposed converter. To adapt to the variations in the output voltage of a photovoltaic cell under different light illumination intensities, we built a reconfigurable converter core with multiple power conversion ratios of 2, 2.5, and 3 for the regulated output voltage of 1.2 V when the input voltage ranged from 0.53 V to 0.7 V. Our measurement results prove that the proposed capacitive power converter could achieve a peak power conversion efficiency of 80.8%, and the efficiency was more than 70% for the load current that ranged from 10 μA to 620 μA.
Charge Pumps for Ultra-Low-Power Applications: Analysis, Design, and New Solutions This brief presents a tutorial of charge pump topologies for the management of self-powered nodes in ultra-low-power applications, such as Internet of Things nodes. It aims to provide the designer with guidelines to choose the most suitable solution according to the given design specifications. After a brief historical overview, the main design equations of charge pumps and a collection of recently proposed topologies and regulation schemes are discussed, allowing for qualitative insight into the state of the art of integrated topologies.
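As one concrete instance of the design equations such tutorials cover, the standard first-order model of an ideal N-stage Dickson pump (no threshold drops or parasitics; this is the textbook relation, not quoted from this brief) ties output voltage to stage count, clock frequency, and pump capacitance:

```python
# Ideal N-stage Dickson charge pump, first-order model:
#   V_out = (N + 1) * V_dd - N * I_load / (f_clk * C_pump)
# Each stage adds one V_dd of boost and one I/(f*C) of load-induced droop.
def dickson_vout(n_stages, vdd, i_load, f_clk, c_pump):
    return (n_stages + 1) * vdd - n_stages * i_load / (f_clk * c_pump)

# Illustrative numbers: 3 stages, 0.6 V supply, 10 uA load, 1 MHz, 100 pF.
print(dickson_vout(n_stages=3, vdd=0.6, i_load=10e-6, f_clk=1e6, c_pump=100e-12))
# -> 2.1 V (2.4 V ideal no-load output minus 0.3 V of load droop)
```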
Algebraic Series-Parallel-Based Switched-Capacitor DC-DC Boost Converter With Wide Input Voltage Range and Enhanced Power Density. This article presents an algebraic series-parallel (ASP) topology for fully integrated switched-capacitor (SC) dc-dc boost converters with flexible fractional voltage conversion ratios (VCRs). By elaborating the output voltage (VOUT) expression into a specific algebraic form, the proposed ASP can achieve improvements on both the charge sharing and bottom-plate-parasitic losses while maintaining th...
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
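Chord's single operation, mapping an identifier to its successor node on the ring, can be sketched in a few lines. This centralized toy resolves the successor with a sorted list; the real protocol does it in O(log N) hops via finger tables. The hash width M and the node names below are arbitrary choices of mine:

```python
import hashlib
from bisect import bisect_right

M = 16                                   # identifier bits (ring of 2^16 positions)

def chord_id(key: str) -> int:
    """Consistent hash of a key or node name onto the identifier ring."""
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big") % (2 ** M)

def successor(node_ids, k):
    """Chord's one operation: map identifier k to the first node whose ID
    follows k clockwise on the ring (wrapping around at 2^M)."""
    i = bisect_right(node_ids, k)
    return node_ids[i % len(node_ids)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
key = chord_id("some-data-item")
print(f"key {key} -> node {successor(nodes, key)}")
```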
Computing size-independent matrix problems on systolic array processors A methodology to transform dense to band matrices is presented in this paper. This transformation is accomplished by partitioning into triangular blocks, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A 12 bit 2.9 GS/s DAC With IM3 ≪ −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high-frequencies, co...
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
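The sponge construction named above is itself simple to sketch: absorb rate-sized message blocks into a b-byte state through a fixed permutation, then squeeze output blocks from the same state. The permutation below is a bijective toy stand-in only (spongent instantiates this slot with a present-type permutation), and all sizes are illustrative:

```python
def toy_perm(state: bytes) -> bytes:
    """Toy b-byte permutation: bijective but cryptographically weak.
    spongent would use a present-type permutation here instead."""
    s = [(b * 7 + 3) % 256 for b in state]   # bytewise affine bijection
    for i in range(1, len(s)):               # invertible mixing chain
        s[i] ^= s[i - 1]
    return bytes(s[1:] + s[:1])              # rotate byte positions

def sponge_hash(msg: bytes, rate=2, capacity=8, out_len=8) -> bytes:
    """Generic sponge: 10* padding, absorb phase, squeeze phase."""
    state = bytes(rate + capacity)
    msg = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % rate)
    for i in range(0, len(msg), rate):                    # absorbing phase
        block = msg[i:i + rate] + bytes(capacity)
        state = toy_perm(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < out_len:                             # squeezing phase
        out += state[:rate]
        state = toy_perm(state)
    return out[:out_len]

print(sponge_hash(b"hello").hex())
```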
Noise Analysis and Simulation Method for a Single-Slope ADC With CDS in a CMOS Image Sensor Many mixed-signal circuits are nonlinear time-varying systems whose noise estimation cannot be obtained from the conventional frequency domain noise simulation (FNS). Although the transient noise simulation (TNS) supported by a commercial simulator takes into account nonlinear time-varying characteristics of the circuit, its simulation time is unacceptably long to obtain meaningful noise estimatio...
A Delay-Locked Loop Synchronization Scheme for High-Frequency Multiphase Hysteretic DC-DC Converters This paper reports a delay-locked loop (DLL) based hysteretic controller for high-frequency multiphase dc-dc buck converters. The DLL control loop employs the switching frequency of a hysteretic comparator as reference to automatically synchronize the remaining phases and eliminate the need for external synchronization. A dedicated duty cycle control loop is used to enable current sharing and ripple cancellation. We demonstrate a four-phase high-frequency buck converter that operates at 25-70 MHz with fast hysteretic control and output conversion range of 17.5%-80%. The converter achieves an efficiency of 83% at 2 W and 80% at 3.3 W. The circuit has been implemented in a standard 0.5 μm 5 V CMOS process.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.2
0.1
0.1
0.1
0.033333
0
0
0
0
0
0
0
0
0
Analysis and Design of Inductorless Wideband Low-Noise Amplifier With Noise Cancellation Technique. This paper deals with the fabrication of an inductorless wideband low-noise amplifier (LNA). The LNA includes two branches in parallel: a common-source (CS) path and a common-gate (CG) path. The CS path is responsible for providing enough power gain, while the CG path is used to achieve the input impedance matching. To eliminate the noise contribution of the CG path, the noise cancellation technique is applied. Therefore, the overall noise figure (NF) is improved. The phase mismatch between the two paths is also quantitatively analyzed to investigate its effect on gain and NF. The analytical results agree well with the simulation results. The LNA has been fabricated in a commercial 0.18-μm CMOS process. The measurement results show that the LNA has achieved a maximum gain of 14.5 dB with 1.7-GHz 3-dB gain bandwidth and a minimum NF of 3 dB. The tested input 1-dB gain compression point (IP1dB) is -10.4 dBm at 1 GHz and the input third-order intercept point is 0.25 dBm. With a 1.8-V supply, the LNA draws only 6 mA of dc current.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
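Given immediate dominators, dominance frontiers can be computed with the compact formulation later popularized by Cooper, Harvey, and Kennedy, which yields the same DF sets as the construction in this paper: walk each join node's predecessors up the dominator tree, stopping at the node's immediate dominator. A sketch over a diamond-shaped CFG (names and representation are mine):

```python
def dominance_frontiers(preds, idom):
    """Dominance frontiers from immediate dominators (Cooper-Harvey-Kennedy
    formulation; same DF sets as Cytron et al.'s construction)."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                # only join points contribute to DFs
            for p in ps:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)   # n is in runner's frontier
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, b; a, b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom = {"a": "entry", "b": "entry", "merge": "entry", "entry": "entry"}
print(dominance_frontiers(preds, idom))   # DF(a) = DF(b) = {'merge'}
```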
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
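As a worked instance of the x-, z-, and dual-update pattern this review describes, the sketch below applies ADMM to the lasso with NumPy. The penalty rho, problem sizes, and fixed iteration count are illustrative assumptions, not prescriptions from the paper.

# Sketch of ADMM for the lasso (min 0.5*||Ax-b||^2 + lam*||x||_1),
# following the generic x-/z-/u-update pattern described above.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho(z - u).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z  # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))  # recovers a sparse estimate

The split between a smooth x-subproblem and a separable z-subproblem is exactly what makes the method amenable to the distributed implementations the review discusses.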
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
CheckMate - Automated Synthesis of Hardware Exploits and Security Litmus Tests. Recent research has uncovered a broad class of security vulnerabilities in which confidential data is leaked through programmer-observable microarchitectural state. In this paper, we present CheckMate, a rigorous approach and automated tool for determining if a microarchitecture is susceptible to specified classes of security exploits, and for synthesizing proof-of-concept exploit code when it is. Our approach adopts "microarchitecturally happens-before" (μhb) graphs which prior work designed to capture the subtle orderings and interleavings of hardware execution events when programs run on a microarchitecture. CheckMate extends μhb graphs to facilitate modeling of security exploit scenarios and hardware execution patterns indicative of classes of exploits. Furthermore, it leverages relational model finding techniques to enable automated exploit program synthesis from microarchitecture and exploit pattern specifications. As a case study, we use CheckMate to evaluate the susceptibility of a speculative out-of-order processor to Flush+Reload cache side-channel attacks. The automatically synthesized results are programs representative of Meltdown and Spectre attacks. We then evaluate the same processor on its susceptibility to a different timing side-channel attack: Prime+Probe. Here, CheckMate synthesized new exploits that are similar to Meltdown and Spectre in that they leverage speculative execution, but unique in that they exploit distinct microarchitectural behaviors---speculative cache line invalidations rather than speculative cache pollution---to form a side-channel. Most importantly, our results validate the CheckMate approach to formal hardware security verification and the ability of the CheckMate tool to detect real-world vulnerabilities.
Reverse Engineering the Stream Prefetcher for Profit Micro-architectural attacks exploit timing channels at different micro-architecture units. Some of the micro-architecture units like cache automatically provide the timing difference (the difference between a hit and a miss). However, there are other units that are not documented, and their influence on the timing difference is not fully understood. One such micro-architecture unit is an L2 hardware prefetcher named Streamer. In this paper, we reverse-engineer the Stream prefetcher, which is commercially available in the Intel machines. We perform a set of experiments and provide our observations and insights. Further, we use these observations to construct a cross-thread covert channel using the Stream prefetcher, with an accuracy of 91.3% and a bandwidth of 54.44 KBps.
Abusing Cache Line Dirty States to Leak Information in Commercial Processors Caches have been used to construct various types of covert and side channels to leak information. Most existing cache channels exploit the timing difference between cache hits and cache misses. However, we introduce a new and broader classification of cache covert channel attacks: Hit+Miss, Hit+Hit, and Miss+Miss. We highlight that cache misses (or cache hits) for cache lines in different states may have more significant time differences, and these can be used as timing channels. Based on this classification, we propose a new stable and stealthy Miss+Miss cache channel. Write-back caches are widely deployed in modern processors. This paper presents in detail a way in which replacement latency differences can be used to construct timing-based channels (called WB channels) to leak information in a write-back cache. Any modification to a cache line by a sender will set it to the dirty state, and the receiver can observe this through measuring the latency of replacing this cache set. We also demonstrate how senders could exploit a different number of dirty cache lines in a cache set to improve transmission bandwidth with symbols encoding multiple bits. The peak transmission bandwidths of the WB channels in commercial systems can vary between 1300 and 4400 kbps per cache set in a hyper-threaded setting without shared memory between the sender and the receiver. In contrast to most existing cache channels, which always target specific memory addresses, the new WB channels focus on the cache set and cache line states, making it difficult for the channel to be disturbed by other processes on the core, and they can still work in a cache using a random replacement policy. We also analyzed the stealthiness of WB channels from the perspective of the number of cache loads and cache miss rates. We discuss and evaluate possible defenses. The paper finishes by discussing various forms of side-channel attack.
Speculative Dereferencing: Reviving Foreshadow In this paper, we provide a systematic analysis of the root cause of the prefetching effect observed in previous works and show that its attribution to a prefetching mechanism is incorrect in all previous works, leading to incorrect conclusions and incomplete defenses. We show that the root cause is speculative dereferencing of user-space registers in the kernel. This new insight enables the first end-to-end Foreshadow (L1TF) exploit targeting non-L1 data, despite Foreshadow mitigations enabled, a novel technique to directly leak register values, and several side-channel attacks. While the L1TF effect is mitigated on the most recent Intel CPUs, all other attacks we present still work on all Intel CPUs and on CPUs by other vendors previously believed to be unaffected.
Prime+Scope: Overcoming the Observer Effect for High-Precision Cache Contention Attacks Modern processors expose software to information leakage through shared microarchitectural state. One of the most severe leakage channels is cache contention, exploited by attacks referred to as PRIME+PROBE, which can infer fine-grained memory access patterns while placing only limited assumptions on attacker capabilities. In this work, we strengthen the cache contention channel with a near-optimal time resolution. We propose PRIME+SCOPE, a cross-core cache contention attack that performs back-to-back cache contention measurements that access only a single cache line. It offers a time resolution of around 70 cycles (25ns), while maintaining the wide applicability of PRIME+PROBE. To enable such a rapid measurement, we rely on the deterministic nature of modern replacement policies and their (non-)interaction across cache levels. We provide a methodology to, essentially, prepare multiple cache levels simultaneously, and apply it to Intel processors with both inclusive and non-inclusive cache hierarchies. We characterize the resolution of PRIME+SCOPE, and confirm it with a cross-core covert channel (capacity up to 3.5 Mbps, no shared memory) and an improved attack on AES T-tables. Finally, we use the properties underlying PRIME+SCOPE to bootstrap the construction of the eviction sets needed for the attack. The resulting routine outperforms state-of-the-art techniques by two orders of magnitude. Ultimately, our work shows that interference through cache contention can provide richer temporal precision than state-of-the-art attacks that directly interact with monitored memory addresses.
Theory and Practice of Finding Eviction Sets Many micro-architectural attacks rely on the capability of an attacker to efficiently find small eviction sets: groups of virtual addresses that map to the same cache set. This capability has become a decisive primitive for cache side-channel, rowhammer, and speculative execution attacks. Despite their importance, algorithms for finding small eviction sets have not been systematically studied in the literature. In this paper, we perform such a systematic study. We begin by formalizing the problem and analyzing the probability that a set of random virtual addresses is an eviction set. We then present novel algorithms, based on ideas from threshold group testing, that reduce random eviction sets to their minimal core in linear time, improving over the quadratic state-of-the-art. We complement the theoretical analysis of our algorithms with a rigorous empirical evaluation in which we identify and isolate factors that affect their reliability in practice, such as adaptive cache replacement strategies and TLB thrashing. Our results indicate that our algorithms enable finding small eviction sets much faster than before, and under conditions where this was previously deemed impractical.
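The group-testing reduction this abstract describes can be sketched as follows: repeatedly split the candidate set into a+1 groups and discard any group whose removal still leaves an eviction set. The evicts oracle below is a toy stand-in for a real timing measurement, and the associativity value and address model are assumptions for illustration.

# Sketch of threshold-group-testing reduction of an eviction set:
# if a set with more than `a` members is split into (a+1) groups, some
# group contains none of the `a` essential lines, so it can be dropped.
def reduce_eviction_set(candidates, evicts, a):
    """Shrink `candidates` to (at most) `a` addresses that still evict."""
    s = list(candidates)
    while len(s) > a:
        chunk = max(1, len(s) // (a + 1))
        groups = [s[i:i + chunk] for i in range(0, len(s), chunk)]
        for g in groups:
            remainder = [x for x in s if x not in g]
            if evicts(remainder):  # still an eviction set without this group
                s = remainder
                break
        else:
            break  # no removable group found; stop rather than loop forever
    return s

# Toy oracle: pretend addresses congruent to 0 mod 7 map to the target
# cache set, and any `a` of them suffice to evict (purely illustrative).
a = 4
def toy_evicts(addrs, a=a):
    return sum(1 for x in addrs if x % 7 == 0) >= a

print(reduce_eviction_set(range(100), toy_evicts, a))  # the 4-address core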
Efficient Cache Attacks on AES, and Countermeasures We describe several software side-channel attacks based on inter-process leakage through the state of the CPU's memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux's dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Scalable Fault-Tolerant Aggregation in Large Process Groups This paper discusses fault-tolerant, scalable solutions to the problem of accurately and scalably calculating global aggregate functions in large process groups communicating over unreliable networks. These groups could represent sensors or processes communicating over a network that is either fixed (e.g., the Internet) or dynamic (e.g., multihop ad-hoc). Group members are prone to failures. The ability to evaluate global aggregate properties (e.g., the average of sensor temperature readings) is important for higher-level coordination activities in such large groups. We first define the setting and problem, laying down metrics to evaluate different algorithms for the same. We discuss why the usual approaches to solve this problem are unviable and unscalable over an unreliable network prone to message delivery failures and crash failures. We then propose a technique to impose an abstract hierarchy on such large groups, describing how this hierarchy can be made to mirror the network topology. We discuss several alternatives to use this technique to solve the global aggregate function evaluation problem. Finally, we present a protocol based on gossiping that uses this hierarchical technique. We present mathematical analysis and performance results to validate the robustness, efficiency and accuracy of the Hierarchical Gossiping algorithm.
A gm/ID based methodology for the design of CMOS analog circuits and its application to the synthesis of a silicon-on-insulator micropower OTA A new design methodology based on a unified treatment of all the regions of operation of the MOS transistor is proposed. It is intended for the design of CMOS analog circuits and especially suited for low power circuits where the moderate inversion region often is used because it provides a good compromise between speed and power consumption. The synthesis procedure is based on the relation betwee...
An Electro-Magnetic Energy Harvesting System With 190 nW Idle Mode Power Consumption for a BAW Based Wireless Sensor Node. State-of-the-art wireless sensor nodes are mostly supplied by batteries. Such systems have the disadvantage that they are not maintenance free because of the limited lifetime of batteries. Instead, wireless sensor nodes or related devices can be remotely powered. To increase the operating range and applicability of these remotely powered devices an electro-magnetic energy harvester is developed in a 0.13 µm low cost CMOS technology. This paper presents an energy harvesting system that converts RF power to DC power to supply wireless sensor nodes, active transmitters or related systems with a power consumption up to the mW range. This energy harvesting system is used to power a wireless sensor node from the 900 MHz RF field. The wireless sensor node includes an on-chip temperature sensor and a bulk acoustic wave (BAW) based transmitter. The BAW resonator reduces the startup time of the transmitter to about 2 µs which reduces the amount of energy needed in one transmission cycle. The maximum output power of the transmitter is 5.4 dBm. The chip contains an ultra-low-power control unit and consumes only 190 nW in idle mode. The required input power is -19.7 dBm.
Wireless sensing and vibration control with increased redundancy and robustness design. Control systems with long distance sensor and actuator wiring have the problem of high system cost and increased sensor noise. Wireless sensor network (WSN)-based control systems are an alternative solution involving lower setup and maintenance costs and reduced sensor noise. However, WSN-based control systems also encounter problems such as possible data loss, irregular sampling periods (due to the uncertainty of the wireless channel), and the possibility of sensor breakdown (due to the increased complexity of the overall control system). In this paper, a wireless microcontroller-based control system is designed and implemented to wirelessly perform vibration control. The wireless microcontroller-based system is quite different from regular control systems due to its limited speed and computational power. Hardware, software, and control algorithm design are described in detail to demonstrate this prototype. Model and system state compensation is used in the wireless control system to solve the problems of data loss and sensor breakdown. A positive position feedback controller is used as the control law for the task of active vibration suppression. Both wired and wireless controllers are implemented. The results show that the WSN-based control system can be successfully used to suppress the vibration and produces resilient results in the presence of sensor failure.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.073849
0.066667
0.066667
0.066667
0.066667
0.033333
0.012089
0.00073
0
0
0
0
0
0
Graph-Theoretic Analysis Of Belief System Dynamics Under Logic Constraints Opinion formation cannot be modeled solely as an ideological deduction from a set of principles; rather, repeated social interactions and logic constraints among statements are consequential in the construct of belief systems. We address three basic questions in the analysis of social opinion dynamics: (i) Will a belief system converge? (ii) How long does it take to converge? (iii) Where does it converge? We provide graph-theoretic answers to these questions for a model of opinion dynamics of a belief system with logic constraints. Our results make plain the implicit dependence of the convergence properties of a belief system on the underlying social network and on the set of logic constraints that relate beliefs on different statements. Moreover, we provide an explicit analysis of a variety of commonly used large-scale network models.
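The sketch below illustrates one common way such belief-system dynamics are written: opinions on several interdependent statements are averaged over the social network and then coupled through a logic-constraint matrix. The specific matrices, the update rule X(k+1) = W X(k) Cᵀ, and the iteration count are illustrative assumptions rather than the paper's exact formulation.

# Sketch of opinion updating on a belief system with logic constraints:
# rows of X are agents, columns are statements; W mixes opinions across
# the social network and C couples logically related statements.
import numpy as np

W = np.array([[0.6, 0.4],
              [0.3, 0.7]])   # row-stochastic social influence (assumed)
C = np.array([[1.0, 0.0],
              [0.5, 0.5]])   # statement 2 partially follows statement 1

X = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # initial beliefs (agents x statements)
for _ in range(100):
    X = W @ X @ C.T          # social averaging plus logic coupling
print(np.round(X, 3))        # the limit, when the system converges

Whether and where such an iteration converges is exactly the kind of question the paper answers graph-theoretically in terms of W and C.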
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
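As a sketch of the dominance-frontier computation on which this SSA construction rests, the Python below implements the standard rule that a block B lies in DF(X) when X dominates a predecessor of B but does not strictly dominate B. The example CFG and the precomputed immediate dominators are assumptions for illustration.

# Sketch of dominance-frontier computation: for each join point B, walk
# up the dominator tree from each predecessor until reaching idom(B),
# adding B to the frontier of every node passed on the way.
def dominance_frontiers(preds, idom):
    df = {n: set() for n in idom}
    for b, ps in preds.items():
        if len(ps) >= 2:                  # only join points have frontiers
            for p in ps:
                runner = p
                while runner != idom[b]:  # walk up the dominator tree
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))  # 'a' and 'b' have 'join' in their DF

The frontier sets are where phi-functions must be placed, which is why computing them efficiently matters for SSA construction.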
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by > 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Concurrent Data Structures for Near-Memory Computing. The performance gap between memory and CPU has grown exponentially. To bridge this gap, hardware architects have proposed near-memory computing (also called processing-in-memory, or PIM), where a lightweight processor (called a PIM core) is located close to memory. Due to its proximity to memory, a memory access from a PIM core is much faster than that from a CPU core. New advances in 3D integration and die-stacked memory make PIM viable in the near future. Prior work has shown significant performance improvements by using PIM for embarrassingly parallel and data-intensive applications, as well as for pointer-chasing traversals in sequential data structures. However, current server machines have hundreds of cores, and algorithms for concurrent data structures exploit these cores to achieve high throughput and scalability, with significant benefits over sequential data structures. Thus, it is important to examine how PIM performs with respect to modern concurrent data structures and understand how concurrent data structures can be developed to take advantage of PIM. This paper is the first to examine the design of concurrent data structures for PIM. We show two main results: (1) naive PIM data structures cannot outperform state-of-the-art concurrent data structures, such as pointer-chasing data structures and FIFO queues, (2) novel designs for PIM data structures, using techniques such as combining, partitioning and pipelining, can outperform traditional concurrent data structures, with a significantly simpler design.
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing through combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to an associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, the GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
Evolution of Memory Architecture Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles ...
Rebooting the Data Access Hierarchy of Computing Systems We have been experiencing two very important movements in computing. On the one hand, a tremendous amount of resource has been invested into innovative applications such as first-principle-based methods, deep learning and cognitive computing. On the other hand, the industry has been taking a technological path where application performance and energy efficiency vary by more than two orders of magnitude depending on their parallelism, heterogeneity, and locality. We envision that a "perfect storm" is coming because of the interaction between these two movements. Many of these new and high-valued applications need to touch a very large amount of data with little data reuse and data movement has become the dominating factor for both power and performance of these applications. It will be critical to match the compute throughput to the data access bandwidth and to locate the compute near data. Much has been and continuously needs to be learned about algorithms, languages, compilers and hardware architecture in this movement. What are the killer applications that may become the new driver for future technology development? How hard is it to program existing systems to address the data movement issues today? How will we program these systems in the future? How will innovations in memory devices present further opportunities and challenges in designing new systems? What is the impact on long-term software engineering cost of applications? In this paper, we present some lessons learned as we design the IBM-Illinois C3SR (Center for Cognitive Computing Systems Research) Erudite system inside this perfect storm.
Hyper-AP: Enhancing Associative Processing Through A Full-Stack Optimization Associative processing (AP) is a promising PIM paradigm that overcomes the von Neumann bottleneck (memory wall) by virtue of a radically different execution model. By decomposing arbitrary computations into a sequence of primitive memory operations (i.e., search and write), AP’s execution model supports concurrent SIMD computations in-situ in the memory array to eliminate the need for data movement. This execution model also provides a native support for flexible data types and only requires a minimal modification on the existing memory design (low hardware complexity). Despite these advantages, the execution model of AP has two limitations that substantially increase the execution time, i.e., 1) it can only search a single pattern in one search operation and 2) it needs to perform a write operation after each search operation. In this paper, we propose the Highly Performant Associative Processor (Hyper-AP) to fully address the aforementioned limitations. The core of Hyper-AP is an enhanced execution model that reduces the number of search and write operations needed for computations, thereby reducing the execution time. This execution model is generic and improves the performance for both CMOS-based and RRAM-based AP, but it is more beneficial for the RRAM-based AP due to the substantially reduced write operations. We then provide complete architecture and micro-architecture with several optimizations to efficiently implement Hyper-AP. In order to reduce the programming complexity, we also develop a compilation framework so that users can write C-like programs with several constraints to run applications on Hyper-AP. Several optimizations have been applied in the compilation process to exploit the unique properties of Hyper-AP. Our experimental results show that, compared with the recent work IMP, Hyper-AP achieves up to 54×/4.4× better power-/area-efficiency for various representative arithmetic operations. For the evaluated benchmarks, Hyper-AP achieves 3.3× speedup and 23.8× energy reduction on average compared with IMP. Our evaluation also confirms that the proposed execution model is more beneficial for the RRAM-based AP than its CMOS-based counterpart.
3.2 Zen: A next-generation high-performance x86 core Codenamed “Zen”, AMD's next-generation, high-performance x86 core targets server, desktop, and mobile client applications. Utilizing Global Foundries' energy-efficient 14nm LPP FinFET process, the 44mm² Zen core complex unit (CCX) has 1.4B transistors and contains a shared 8MB L3 cache and four cores (Fig. 3.2.7). The 7mm² Zen core contains a dedicated 0.5MB L2 cache, 32KB L1 data cache, and 64KB L1 instruction cache. Each core has a digital low drop-out (LDO) voltage regulator and digital frequency synthesizer (DFS) to independently vary frequency and voltage across power states.
FPGA-based Near-Memory Acceleration of Modern Data-Intensive Applications Modern data-intensive applications demand high computational capabilities with strict power constraints. Unfortunately, such applications suffer from a significant waste of both execution cycles and energy in current computing systems due to the costly data movement between the computation units and the memory units. Genome analysis and weather prediction are two examples of such applications. Rec...
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash-table-based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
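The toy sketch below illustrates filtering in the spirit of Adjacency Filtering: candidate locations for a read's first k-mer are kept only when the read's subsequent k-mers occur at the matching offsets in the reference. The reference, read, and k are assumptions, and real mappers must also tolerate errors, which this exact-offset check deliberately ignores.

# Toy seed-and-extend filter: index all reference k-mers, then keep only
# those first-k-mer hits whose later non-overlapping k-mers line up at
# the expected reference offsets (a simplified adjacency check).
def build_index(ref, k):
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], []).append(i)
    return index

def candidates(read, ref, k):
    index = build_index(ref, k)
    hits = []
    for loc in index.get(read[:k], []):
        # Each subsequent k-mer must occur at reference position loc + j.
        if all(loc + j in index.get(read[j:j + k], [])
               for j in range(k, len(read) - k + 1, k)):
            hits.append(loc)
    return hits

ref = "ACGTACGTTTACGTACGA"
print(candidates("ACGTACGT", ref, k=4))  # only position 0 survives the filter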
Co-designing accelerators and SoC interfaces using gem5-Aladdin. Increasing demand for power-efficient, high-performance computing has spurred a growing number and diversity of hardware accelerators in mobile and server Systems on Chip (SoCs). This paper makes the case that the co-design of the accelerator microarchitecture with the system in which it belongs is critical to balanced, efficient accelerator microarchitectures. We find that data movement and coherence management for accelerators are significant yet often unaccounted components of total accelerator runtime, resulting in misleading performance predictions and inefficient accelerator designs. To explore the design space of accelerator-system co-design, we develop gem5-Aladdin, an SoC simulator that captures dynamic interactions between accelerators and the SoC platform, and validate it to within 6% against real hardware. Our co-design studies show that the optimal energy-delay-product (EDP) of an accelerator microarchitecture can improve by up to 7.4X when system-level effects are considered compared to optimizing accelerators in isolation.
SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker - Spiking Neural Network architecture - is a massively parallel computer system designed to provide a cost-effective and flexible simulator for neuroscience experiments. It can model up to a billion neurons and a trillion synapses in biological real time. The basic building block is the SpiNNaker Chip Multiprocessor (CMP), which is a custom-designed globally asynchronous locally synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. In this paper, we review the design requirements for its very demanding target application, the SpiNNaker micro-architecture and its implementation issues. We also evaluate the SpiNNaker CMP, which contains 100 million transistors in a 102-mm2 die, provides a peak performance of 3.96 GIPS, and has a peak power consumption of 1 W when all processor cores operate at the nominal frequency of 180 MHz. SpiNNaker chips are fully operational and meet their power and performance requirements.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
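A minimal sketch of this style of gossip-based averaging, assuming a synchronous round model and a fully connected overlay (both simplifications relative to the protocol described): each node repeatedly averages its value with a randomly chosen peer, and every local value converges to the global mean.

# Sketch of pairwise exchange-and-average gossip. Each exchange preserves
# the sum of all values, so the common limit is the global average.
import random

def gossip_average(values, rounds=30, seed=1):
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)         # pick a random peer
            mean = (v[i] + v[j]) / 2.0   # pairwise exchange-and-average
            v[i] = v[j] = mean
    return v

loads = [10.0, 0.0, 4.0, 6.0]            # e.g., per-node load readings
print(gossip_average(loads))             # every entry approaches 5.0

Because all nodes hold the running estimate at all times, the scheme is proactive in the sense the abstract describes: any node can read off the aggregate continuously.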
Rigorous analysis of delta-sigma modulators for fractional-N PLL frequency synthesis In this paper, rigorous analyses are presented for higher order multistage noise shaping (MASH) Delta-Sigma (ΔΣ) modulators, which are built out of cascaded first-order stages, with rational DC inputs and nonzero initial conditions. Asymptotic statistics such as the mean, average power, and autocorrelation of the binary quantizer error are formulated using a nonlinear differenc...
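A digital MASH stage in a fractional-N synthesizer is essentially an accumulator whose carry-out acts as the 1-bit quantizer. The sketch below simulates one such first-order stage with a rational DC input; the word length, input value, and step count are illustrative assumptions. The time-averaged carry stream approaches the rational input x/2^bits, the quantity whose error statistics the paper analyzes rigorously.

# Toy simulation of one first-order accumulator stage of a digital MASH
# delta-sigma modulator: the carry-out is the quantizer output, and the
# accumulator residue is the error that would feed the next stage.
def first_order_stage(x, bits=8, steps=1000):
    m = 1 << bits           # accumulator modulus (2^bits)
    acc = 0                 # a nonzero initial condition could be set here
    out = []
    for _ in range(steps):
        acc += x
        carry = acc >= m    # 1-bit quantizer output (the carry)
        acc -= m * carry
        out.append(int(carry))
    return out

seq = first_order_stage(x=77, bits=8)
print(sum(seq) / len(seq))  # average approaches 77/256 = 0.30078...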
Software defined radios for small satellites Clusters, constellations, formations, or 'swarms' of small satellites are fast becoming a way to perform scientific and technological missions more affordably. As objectives of these missions become more ambitious, there are still problems in increasing the number of communication windows, supporting multiple signals, and increasing data rates over reliable intersatellite and ground links to Earth. Also, there is a shortage of available frequencies in the 2 m and 70 cm bands due to the rapid increase in the number of CubeSats orbiting the Earth - leading to further regulatory issues. Existing communication systems and radio signal processing Intellectual Property (IP) cores cannot fully address these challenges. One of the possible strategies to solve these issues is by equipping satellites with a Software Defined Radio (SDR). SDR is a key area to realise various software implementations which enable an adaptive and reconfigurable communication system without changing any hardware device or feature. This paper proposes a new SDR architecture which utilises a combination of Field Programmable Gate Array (FPGA) and field programmable Radio Frequency (RF) transceiver to solve back-end and front-end challenges and thereby enable reception of multiple signals or satellites using single user equipment.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
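A toy illustration of the idea of slope-dependent sample-rate adaptation (not the paper's pulse-generator circuit): the next sampling interval is chosen inversely proportional to the local slope magnitude, clamped between a minimum and maximum interval. All constants and names here are illustrative assumptions.

import math

def sample_by_slope(sig, t_end, dt_min, dt_max, k):
    # sig: callable t -> value. Steep regions get short intervals,
    # flat regions get long ones, emulating a slope-dependent rate.
    t = 0.0
    out = [(t, sig(t))]
    while t < t_end:
        eps = dt_min / 10.0
        slope = abs(sig(t + eps) - sig(t)) / eps   # local slope estimate
        dt = min(dt_max, max(dt_min, k / (slope + 1e-12)))
        t += dt
        out.append((t, sig(t)))
    return out

samples = sample_by_slope(lambda t: math.sin(2 * math.pi * 5 * t),
                          t_end=1.0, dt_min=1e-3, dt_max=5e-2, k=0.03)
print(len(samples))   # far fewer samples than uniform sampling at dt_min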
1.033661
0.033333
0.033333
0.033333
0.033333
0.033333
0.022222
0.008527
0.000094
0.000004
0
0
0
0
FastRemap: a tool for quickly remapping reads between genome assemblies Motivation: A genome read dataset can be quickly and efficiently remapped from one reference to another similar reference (e.g., between two reference versions or two similar species) using a variety of tools, e.g., the commonly used CrossMap tool. With the explosion of available genomic datasets and references, high-performance remapping tools will be even more important for keeping up with the computational demands of genome assembly and analysis. Results: We provide FastRemap, a fast and efficient tool for remapping reads between genome assemblies. FastRemap provides up to a 7.82x speedup (6.47x on average) and uses as little as 61.7% (80.7% on average) of the peak memory consumption compared to the state-of-the-art remapping tool, CrossMap.
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way, reaching 125 GCUPS on average and a peak of almost 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively. Copyright © 2014 John Wiley & Sons, Ltd.
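For reference, the local-alignment recurrence that such GPU kernels tile and parallelize is the Smith-Waterman dynamic program; a minimal CPU version in Python follows, with illustrative (not the paper's) scoring parameters and linear gap penalties.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    # H[i][j] is the best score of a local alignment ending at a[i-1],
    # b[j-1]; the 0 option lets an alignment restart anywhere, which is
    # what distinguishes local from global/semi-global alignment.
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))   # best local alignment score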
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FindeR: Accelerating FM-Index-Based Exact Pattern Matching in Genomic Sequences through ReRAM Technology Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by 83% ~ 30K× and throughput per Watt by 3.5×~42.5K×.
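The FM-index kernel that FindeR accelerates is backward search over the BWT; a small self-contained Python sketch follows (naive O(n) Occ counts, fine for a demo but exactly the operation the hardware replaces with rank structures):

from bisect import bisect_left

def bwt(s):
    # Burrows-Wheeler transform via full rotation sort (demo-sized only).
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def backward_search(bwt_s, pattern):
    # Maintain the interval [lo, hi) of sorted rotations prefixed by the
    # growing suffix of the pattern; its final width is the match count.
    sorted_s = sorted(bwt_s)
    C = {c: bisect_left(sorted_s, c) for c in set(bwt_s)}   # chars < c
    occ = lambda c, k: bwt_s[:k].count(c)                    # naive Occ(c, k)
    lo, hi = 0, len(bwt_s)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo, hi = C[c] + occ(c, lo), C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(backward_search(bwt("abracadabra"), "abra"))   # prints 2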
GateKeeper-GPU: Fast and Accurate Pre-Alignment Filtering in Short Read Mapping We introduce GateKeeper-GPU, a fast and accurate pre-alignment filter that efficiently reduces the need for expensive sequence alignment. GateKeeper-GPU improves the filtering accuracy of GateKeeper, and by exploiting the massive parallelism provided by GPU threads it concurrently examines numerous sequence pairs rapidly. GateKeeper-GPU is available at https://github.com/BilkentCompGen/GateKeeper-...
An FPGA Implementation of A Portable DNA Sequencing Device Based on RISC-V Miniature and mobile DNA sequencers are steadily growing in popularity as effective tools for genetics research. As basecalling algorithms continue to evolve, basecalling poses a serious challenge for small computing devices despite its increasing accuracy. Although general-purpose computing chips such as CPUs and GPUs can achieve fast results, they are not energy efficient enough for mobile applications. This paper presents an innovative solution, a basecalling hardware architecture based on RISC-V ISA, and after validation with our custom FPGA verification platform, it demonstrates a 1.95x energy efficiency ratio compared to x86. There is also a 38% improvement in energy efficiency ratio compared to ARM. In addition, this study also completes the verification work for subsequent ASIC designs.
Accelerating read mapping with FastHASH. With the introduction of next-generation sequencing (NGS) technologies, we are facing an exponential increase in the amount of genomic sequence data. The success of all medical and genetic applications of next-generation sequencing critically depends on the existence of computational techniques that can process and analyze the enormous amount of sequence data quickly and accurately. Unfortunately, the current read mapping algorithms have difficulties in coping with the massive amounts of data generated by NGS. We propose a new algorithm, FastHASH, which drastically improves the performance of the seed-and-extend type hash table based read mapping algorithms, while maintaining the high sensitivity and comprehensiveness of such methods. FastHASH is a generic algorithm compatible with all seed-and-extend class read mapping algorithms. It introduces two main techniques, namely Adjacency Filtering, and Cheap K-mer Selection. We implemented FastHASH and merged it into the codebase of the popular read mapping program, mrFAST. Depending on the edit distance cutoffs, we observed up to 19-fold speedup while still maintaining 100% sensitivity and high comprehensiveness.
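In the same spirit, a toy seed-and-extend mapper with a cheap-k-mer-selection flavor is sketched below: a k-mer hash table proposes candidate locations, only the least frequent seeds are verified, and verification is a simple Hamming-distance check. This is an illustrative simplification; FastHASH's actual Adjacency Filtering is more involved.

from collections import defaultdict

def build_index(ref, k):
    idx = defaultdict(list)
    for i in range(len(ref) - k + 1):
        idx[ref[i:i + k]].append(i)
    return idx

def map_read(read, ref, idx, k, max_mismatch=2):
    # Cheap k-mer selection: sort seeds by hit count and verify only the
    # cheapest ones, so repetitive k-mers do not trigger many verifications.
    seeds = sorted((len(idx.get(read[i:i + k], ())), i)
                   for i in range(0, len(read) - k + 1, k))
    hits = set()
    for _, off in seeds[:2]:                 # verify the 2 cheapest seeds
        for pos in idx.get(read[off:off + k], ()):
            start = pos - off
            if 0 <= start <= len(ref) - len(read):
                cand = ref[start:start + len(read)]
                if sum(a != b for a, b in zip(read, cand)) <= max_mismatch:
                    hits.add(start)
    return sorted(hits)

ref = "ACGTACGTTAGCCGATTACAGGTTACCA" * 4
idx = build_index(ref, k=4)
print(map_read("TAGCCGATTACA", ref, idx, k=4))   # one hit per repeat unit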
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
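The algebraic machinery above ultimately produces the 2^n x 2^n transition matrix of the network; the same object can be built directly by state enumeration, which makes the fixed-point and cycle formulas easy to check on small examples. A minimal Python sketch (the update functions below are an invented example, not from the paper):

def boolean_network_dynamics(update, n):
    # update: one function per node mapping the current state (a bit
    # tuple) to that node's next value. Encoding states as n-bit ints,
    # nxt[s] is exactly the column of the transition matrix selected by s.
    def step(s):
        bits = tuple((s >> i) & 1 for i in range(n))
        return sum(f(bits) << i for i, f in enumerate(update))
    nxt = [step(s) for s in range(1 << n)]
    fixed = [s for s, t in enumerate(nxt) if s == t]
    cycles = set()
    for s in range(1 << n):
        seen = {}
        while s not in seen:             # walk until the trajectory repeats
            seen[s] = len(seen)
            s = nxt[s]
        cycles.add(len(seen) - seen[s])  # length of the cycle entered
    return fixed, sorted(cycles)

# Example network: x0' = x1 AND x2, x1' = NOT x0, x2' = x1
f = (lambda b: b[1] & b[2], lambda b: 1 - b[0], lambda b: b[1])
print(boolean_network_dynamics(f, 3))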
The Transitive Reduction of a Directed Graph
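The entry above carries only the title; for context, the transitive reduction of a DAG is the unique minimal edge set with the same reachability relation, and a simple (non-optimal) construction is to drop every edge implied by an alternative path. A small Python sketch, correct for DAGs:

def transitive_reduction(adj):
    # adj: dict mapping each node of a DAG to the set of its successors.
    # Edge (u, v) is redundant iff v is reachable from u without it.
    def reachable(src, dst, skip_edge):
        stack, seen = [src], set()
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if (x, y) == skip_edge or y in seen:
                    continue
                if y == dst:
                    return True
                seen.add(y)
                stack.append(y)
        return False
    return {u: {v for v in vs if not reachable(u, v, (u, v))}
            for u, vs in adj.items()}

g = {"a": {"b", "c"}, "b": {"c"}, "c": set()}
print(transitive_reduction(g))   # a->c is dropped: it is implied by a->b->c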
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
FPGA Implementation of High-Frequency Software Radio Receiver State-of-the-art analog-to-digital converters allow the design of high-frequency software radio receivers that use baseband signal processing. However, such receivers are rarely considered in the literature. In this paper, we describe the design of a high-performance receiver operating at high frequencies, whose digital part is entirely implemented in an FPGA device. The design of the digital subsystem is given, together with the design of a low-cost analog front end.
A Hybrid Dynamic Load Balancing Algorithm For Distributed Systems Using Genetic Algorithms Dynamic Load Balancing (DLB) is a sine qua non in modern distributed systems to ensure the efficient utilization of the computing resources therein. This paper proposes a novel framework for hybrid dynamic load balancing. The framework uses a Genetic Algorithm (GA)-based supernode selection approach. The GA-based approach is useful in choosing optimally loaded nodes as the supernodes directly from the data set, thereby essentially improving the speed of the load balancing process. Applying the proposed GA-based approach, this work analyzes the performance of the hybrid DLB algorithm under different system states such as lightly loaded, moderately loaded, and highly loaded. The performance is measured with respect to three parameters: average response time, average round trip time, and average completion time of the users. Further, it also evaluates the performance of the hybrid algorithm utilizing OnLine Transaction Processing (OLTP) and Sparse Matrix Vector Multiplication (SPMV) benchmark applications to analyze its adaptability to I/O-intensive, memory-intensive, or/and CPU-intensive applications. The experimental results show that the hybrid algorithm significantly improves performance under different system states and under a wide range of workloads compared to the traditional decentralized algorithm.
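A toy version of the GA-based supernode selection step is sketched below; the chromosome encoding, fitness (prefer lightly loaded nodes), and operators are illustrative assumptions rather than the paper's exact design.

import random

def ga_select_supernodes(loads, k, pop=30, gens=40, seed=1):
    # Chromosome: k distinct node indices. Fitness rewards picking
    # lightly loaded nodes, mimicking 'optimally loaded' supernodes.
    rng = random.Random(seed)
    n = len(loads)
    fitness = lambda ch: -sum(loads[i] for i in ch)
    population = [rng.sample(range(n), k) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            child = rng.sample(list(set(a) | set(b)), k)   # union crossover
            if rng.random() < 0.2:                         # point mutation
                child[rng.randrange(k)] = rng.randrange(n)
            if len(set(child)) == k:                       # keep valid only
                children.append(child)
        population = parents + children
    return sorted(max(population, key=fitness))

rng0 = random.Random(0)
loads = [rng0.uniform(0.0, 1.0) for _ in range(50)]
print(ga_select_supernodes(loads, k=5))   # indices of lightly loaded nodes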
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs) as one of today’s main flavor of deep learning techniques dominate in various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural networks compression and hardware acceleration, whi...
1.2
0.2
0.2
0.2
0.2
0.2
0.2
0.05
0
0
0
0
0
0
An Authenticated Encryption Based Security Framework for NoC Architectures Network on Chip (NoC) is an emerging solution to the existing scalability problems with SoC. However, it is exposed to security threats like extraction of secret information from IP cores. In this paper, we present an Authenticated Encryption (AE)-based security framework for NoC based systems. The security framework resides in the Network Interface (NI) of every secure IP core, allowing secure communication among such IP cores. We simulated and implemented our framework using Verilog/VHDL modules on top of the NoCem emulator. The results showed tolerable area overhead and did not affect the network performance apart from some initial latency.
Energy Efficient Run-Time Incremental Mapping for 3-D Networks-on-Chip 3-D Networks-on-Chip (NoC) emerge as a potent solution to address both the interconnection and design complexity problems facing future Multiprocessor System-on-Chips (MPSoCs). Effective run-time mapping on such 3-D NoC-based MPSoCs can be quite challenging, as the arrival order and task graphs of the target applications are typically not known a priori, which can be further complicated by stringent energy requirements for NoC systems. This paper thus presents an energy-aware run-time incremental mapping algorithm (ERIM) for 3-D NoC which can minimize the energy consumption due to the data communications among processor cores, while reducing the fragmentation effect on the incoming applications to be mapped, and simultaneously satisfying the thermal constraints imposed on each incoming application. Specifically, incoming applications are mapped to cuboid tile regions for lower energy consumption of communication and minimal routing. Fragment tiles due to system fragmentation can be gleaned for better resource utilization. Extensive experiments have been conducted to evaluate the performance of the proposed algorithm ERIM, and the results are compared against the optimal mapping algorithm (branch-and-bound) and two heuristic algorithms (TB and TL). The experiments show that ERIM outperforms the TB and TL methods with significant energy saving (more than 10%), much reduced average response time, and improved system utilization.
A Security Framework for NoC Using Authenticated Encryption and Session Keys Network on Chip (NoC) is an emerging solution to the existing scalability problems with System on Chip (SoC). However, it is exposed to security threats like extraction of secret information from IP cores. In this paper we present an Authenticated Encryption (AE)-based security framework for NoC based systems. The security framework resides in the Network Interface (NI) of every IP core, allowing secure communication among such IP cores. The secure cores can communicate using permanent keys whereas temporary session keys are used for communication between secure and non-secure cores. A traffic limiting counter is used to prevent bandwidth denial and an access rights table avoids unauthorized memory accesses. We simulated and implemented our framework using Verilog/VHDL modules on top of the NoCem emulator. The results showed tolerable area overhead and did not affect the network performance apart from some initial latency.
Secure Model Checkers for Network-on-Chip (NoC) Architectures. As chip multiprocessors (CMPs) are becoming more susceptible to process variation, crosstalk, and hard and soft errors, emerging threats from rogue employees in a compromised foundry are creating new vulnerabilities that could undermine the integrity of our chips with malicious alterations. As the Network-on-Chip (NoC) is a focal point of sensitive data transfer and critical device coordination, there is an urgent demand for secure and reliable communication. In this paper we propose Secure Model Checkers (SMCs), a real-time solution for control logic verification and functional correctness in the micro-architecture to detect Hardware Trojan (HT)-induced denial-of-service attacks and improve reliability. In our evaluation, we show that SMCs provide significant security enhancements in real time with only 1.5% power and 1.1% area overhead penalty in the micro-architecture.
Information Hiding behind Approximate Computation There have recently been many interesting advances in approximate computing targeting energy efficiency in system design and execution. The basic idea is to trade computation accuracy for power and energy during all phases of the computation, from data to algorithm and hardware implementation. In this paper, we explore how to utilize approximate computing for security-based information hiding. More specifically, we demonstrate with examples the potential of embedding information in approximate hardware and approximate data, as well as during approximate computation. We analyze both the security vulnerabilities that this may cause and the potential security applications enabled by such information hiding. We argue that information can be hidden behind approximate computation without compromising computation accuracy or energy efficiency.
Chiplet-Package Co-Design For 2.5D Systems Using Standard ASIC CAD Tools Chiplet integration using 2.5D packaging is gaining popularity, as it enables interesting features such as heterogeneous integration and the drop-in design method. In the traditional die-by-die approach of designing a 2.5D system, each chiplet is designed independently without any knowledge of the package RDLs. In this paper, we propose a chip-package co-design flow for implementing 2.5D systems using existing commercial chip design tools. Our flow encompasses 2.5D-aware partitioning suitable for SoC design, chip-package floorplanning, and post-design analysis and verification of the entire 2.5D system. We also designed our own package planners to route RDL layers on top of chiplet layers. We use an ARM Cortex-M0 SoC system to illustrate our flow and compare analysis results with a monolithic 2D implementation of the same system. We also compare two different 2.5D implementations of the same SoC system following the drop-in approach. Alongside the traditional die-by-die approach, our holistic flow enables design efficiency and flexibility with accurate cross-boundary parasitic extraction and design verification.
An exploration of L2 cache covert channels in virtualized environments Recent exploration into the unique security challenges of cloud computing has shown that when virtual machines belonging to different customers share the same physical machine, new forms of cross-VM covert channel communication arise. In this paper, we explore one of these threats, L2 cache covert channels, and demonstrate the limits of this threat by providing a quantification of the channel bit rates and an assessment of its ability to do harm. Through progressively refining models of cross-VM covert channels, from the derived maximums, to implementable channels in the lab, and finally in Amazon EC2 itself, we show how a variety of factors impact our ability to create effective channels. While we demonstrate a covert channel with considerably higher bit rate than previously reported, we assess that even at such improved rates, the harm of data exfiltration from these channels is still limited to the sharing of small, if important, secrets such as private keys.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
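The Bayesian scoring metric at the heart of this construction method (the Cooper-Herskovits marginal likelihood, used by K2-style greedy search) can be sketched compactly; the data layout and the tiny dataset below are illustrative assumptions.

from math import lgamma
from itertools import product

def ch_score(data, child, parents, arity):
    # Log marginal likelihood of `child` given a candidate parent set,
    # from counts N_ijk: for each parent configuration j,
    #   log[(r-1)! / (N_ij + r - 1)!] + sum_k log(N_ijk!)
    # data: list of dicts var -> state; arity: dict var -> #states.
    r = arity[child]
    score = 0.0
    for pj in product(*(range(arity[p]) for p in parents)):
        rows = [d for d in data if all(d[p] == v for p, v in zip(parents, pj))]
        score += lgamma(r) - lgamma(len(rows) + r)
        for k in range(r):
            n_ijk = sum(1 for d in rows if d[child] == k)
            score += lgamma(n_ijk + 1)
    return score

data = [{"x": 0, "y": 0}, {"x": 0, "y": 0}, {"x": 1, "y": 1}, {"x": 1, "y": 1}]
arity = {"x": 2, "y": 2}
# Perfectly correlated data: the parent set {x} should outscore no parents.
print(ch_score(data, "y", ["x"], arity) > ch_score(data, "y", [], arity))  # True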
A Case for Intelligent RAM Two trends call into question the current practice of microprocessors and DRAMs being fabricated as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip is becoming awkward to use in a system, yet size is growing at 60% per year. Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, and improve energy efficiency as well as to allow more flexible selection of memory size and organization. In addition, IRAM promises savings in power and board area. We review the state of microprocessors and DRAMs today, explore some of the opportunities and challenges for IRAMs, and finally estimate performance and energy efficiency of three IRAM designs.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.11
0.1
0.1
0.1
0.1
0.05
0.008231
0
0
0
0
0
0
0
On the Probability of Unsafe Disagreement in Group Formation Algorithms for Vehicular Ad Hoc Networks We address the problem of group formation in automotive cooperative applications using wireless vehicle-to-vehicle communication. Group formation (GF) is an essential step in bootstrapping self-organizing distributed applications such as virtual traffic lights. We propose a synchronous GF algorithm and investigate its behaviour in the presence of an unbounded number of asymmetric communication failures (receive omissions). Given that GF is an agreement problem, we know from previous research that it is impossible to design a GF algorithm that can guarantee agreement on the group membership in the presence of an unbounded number of message losses. Thus, under this assumption, disagreement is an unavoidable outcome of a GF algorithm. We consider two types of disagreement (failure modes): safe and unsafe disagreement. To reduce the probability of unsafe disagreement, our algorithm uses a local oracle to estimate the number of nodes that are attempting to participate in the GF process. (Such estimates can be provided by roadside sensors or local sensors in a vehicle such as cameras.) For the proposed algorithm, we show how the probability of unsafe and safe disagreement varies for different system settings as a function of the probability of message loss. We also show how these probabilities vary depending on the correctness of the local oracles. More specifically, our results show that unsafe disagreement occurs only if the local oracles underestimate the number of participating nodes.
Design and Analysis of a Leader Election Algorithm for Mobile Ad Hoc Networks Leader election is a very important problem, not only in wired networks, but in mobile, ad hoc networks as well. Existing solutions to leader election do not handle frequent topology changes and dynamic nature of mobile networks. In this paper, we present a leader election algorithm that is highly adaptive to arbitrary (possibly concurrent) topological changes and is therefore well-suited for use in mobile ad hoc networks. The algorithm is based on finding an extrema and uses diffusing computations for this purpose. We show, using linear-time temporal logic, that the algorithm is "weakly" self-stabilizing and terminating. We also simulate the algorithm in a mobile ad hoc setting. Through our simulation study, we elaborate on several important issues that can significantly impact performance of such a protocol for mobile ad hoc networks such as choice of signaling, broadcast nature of wireless medium etc. Our simulation study shows that our algorithm is quite effective in that each node has a leader approximately 97-99% of the time in a variety of operating conditions.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
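The heart of Chord, consistent hashing onto an identifier ring with successor lookup, fits in a few lines; this sketch omits finger tables (which give the logarithmic hop count) and all of the join/leave machinery, and the names are illustrative.

import hashlib
from bisect import bisect_right

def ring_id(name, m=16):
    # Hash a node or key name onto the 2**m identifier ring.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << m)

class Ring:
    def __init__(self, nodes, m=16):
        self.m = m
        self.ids = sorted((ring_id(n, m), n) for n in nodes)

    def successor(self, key):
        # A key lives on the first node whose identifier follows
        # hash(key) on the ring, wrapping around at the top.
        i = bisect_right(self.ids, (ring_id(key, self.m),))
        return self.ids[i % len(self.ids)][1]

ring = Ring(["node%d" % i for i in range(8)])
print(ring.successor("my-file.txt"))
# Adding or removing one node only remaps the keys of a single ring arc.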
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Improved delay-dependent stability criteria for time-delay systems This note provides an improved asymptotic stability condition for time-delay systems in terms of a strict linear matrix inequality. Unlike previous methods, the mathematical development avoids bounding certain cross terms which often leads to conservatism. When time-varying norm-bounded uncertainties appear in a delay system, an improved robust delay-dependent stability condition is also given. Examples are provided to demonstrate the reduced conservatism of the proposed conditions. Index Terms—Delay-dependent condition, linear matrix inequality (LMI), time-delay systems, uncertain systems.
Friends and neighbors on the Web The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.
On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log(n/ε)) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
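The randomized side of this gap can be illustrated with a simplified simulation: informed nodes transmit with a fixed probability each slot (the actual protocol uses a Decay schedule of halving probabilities), and a node receives only when exactly one neighbor transmits, matching the no-collision-detection model above. The topology and parameters below are invented for the demo.

import random

def broadcast(adj, source, rounds=200, seed=0):
    # adj: dict node -> set of neighbors (symmetric). Reception happens
    # iff exactly one neighbor transmits in a slot; collisions are lost.
    rng = random.Random(seed)
    informed = {source}
    for _ in range(rounds):
        tx = {v for v in informed if rng.random() < 0.5}
        new = {v for v in set(adj) - informed
               if sum(u in tx for u in adj[v]) == 1}
        informed |= new
        if len(informed) == len(adj):
            break
    return len(informed)

# Two-layer demo: source 0 reaches layer 1 easily, but node 11 hears all
# ten layer-1 nodes, so it is informed only when exactly one transmits.
adj = {0: set(range(1, 11)), 11: set(range(1, 11))}
for v in range(1, 11):
    adj[v] = {0, 11}
print(broadcast(adj, 0), "of", len(adj), "nodes informed")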
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.002062
0
0
0
0
0
0
0
0
0
0
0
0
A 3-10 fJ/conv-step Error-Shaping Alias-Free Continuous-Time ADC. Continuous-time data conversion and continuous-time DSP are an interesting alternative to conventional methods of signal conversion and processing. This alternative does not suffer from aliasing, shows superior spectral properties (e.g., no quantization noise floor), and enables event-driven flexible signal processing capabilities using digital circuits, thus scaling well with technology. However,...
A Nonuniform Sampling ADC Architecture With Reconfigurable Digital Anti-Aliasing Filter. This work proposes a nonuniform sampling analog-to-digital converter (ADC) architecture that incorporates a reconfigurable digital anti-aliasing (AA) filter in the asynchronous digital domain. Considering applications where the signal frequency, bandwidth, or activity may significantly vary over time and operating conditions, it provides high flexibility, relaxes analog AA filter requirements, ada...
A Memristor-Based Continuous-Time Digital FIR Filter for Biomedical Signal Processing This paper proposes a new timing storage circuit based on memristors. Its ability to store and reproduce timing information in an analog manner without performing quantization can be useful for a wide range of applications. For continuous-time (CT) digital filters, the power and area costly analog delay blocks, which are usually implemented as inverter chains or their variants, can be replaced by the proposed timing storage circuits to delay CT digital signals in a more efficient way, especially for low-frequency biomedical applications that require very long tap delays. In addition, the same timing storage circuits also enable the storage of CT digital signals, extending the benefits of CT digital signal processing (DSP) to applications that require signal storage. As an example, a 15-tap CT finite impulse response (FIR) Savitzky-Golay (S-G) filter was designed with memristor-based delay blocks to smoothen electrocardiographic (ECG) signals accompanied with high-frequency noise. The simulated power consumption under a 3.3-volt supply was 6.63 .
An ECG recording front-end with continuous-time level-crossing sampling. An ECG recording front-end with a continuous-time asynchronous level-crossing analog-to-digital converter (LC-ADC) is proposed. The system is a voltage and current mixed-mode system, which comprises a low noise amplifier (LNA), a programmable voltage-to-current converter (PVCC) as a programmable gain amplifier (PGA) and an LC-ADC with calibration DACs and an RC oscillator. The LNA shows an input referred noise of 3.77 μVrms over a 0.06 Hz-950 Hz bandwidth. The total harmonic distortion (THD) of the LNA is 0.15% for a 10 mVPP input. The ECG front-end consumes 8.49 μW from a 1 V supply and achieves an ENOB up to 8 bits. The core area of the proposed front-end is 690 × 710 μm², fabricated in a 0.18 μm CMOS technology.
Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things In the Internet-of-Things (IoT) era, billions of sensors and devices collect and process data from the environment, transmit them to cloud centers, and receive feedback via the Internet for connectivity and perception. However, transmitting massive amounts of heterogeneous data, perceiving complex environments from these data, and then making smart decisions in a timely manner are difficult. Artif...
A VCO Based Highly Digital Temperature Sensor With 0.034 °C/mV Supply Sensitivity. A self-referenced VCO-based temperature sensor with reduced supply sensitivity is presented. The proposed sensor converts temperature information to frequency and then into digital bits. A novel sensing technique is proposed in which temperature information is acquired by evaluating the ratio of the output frequencies of two ring oscillators, designed to have different temperature sensitivities, t...
A 42 fJ/Step-FoM Two-Step VCO-Based Delta-Sigma ADC in 40 nm CMOS A 40 MHz-BW 10 bit two-step VCO-based Delta-Sigma ADC is presented. With the open-loop structure and highly digital building blocks, a robust performance, high bandwidth and high power efficiency are achieved. The nonlinearities of the coarse and the fine VCO-based quantizers are mitigated by distortion cancellation and voltage swing reduction schemes respectively. Because of the intrinsic DEM of the VCO-based quantizer output, the matching requirement of the DAC cells is greatly relaxed. The experimental results in 40 nm CMOS show that, with a 1.6 GHz sampling frequency, the proposed ADC reaches 59.5 dB SNDR and 67.7 dB SFDR over a 40 MHz bandwidth. The power consumption is only 2.57 mW under a 0.9 V power supply, corresponding to the best FoM (42 fJ/step) among high-bandwidth (>20 MHz) Delta-Sigma ADCs.
A 174.3-dB FoM VCO-Based CT ΔΣ Modulator With a Fully-Digital Phase Extended Quantizer and Tri-Level Resistor DAC in 130-nm CMOS. This paper presents a high dynamic range (DR) power-efficient voltage-controlled oscillator (VCO)-based continuous-time ΔΣ modulator. It introduces a robust and low-power fully-digital phase extended quantizer that doubles the VCO quantizer resolution compared to a conventional XOR-based phase detector. A tri-level resistor digital-to-analog converter is also introduced as complementary to the new...
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
Principles of Distributed Systems, 13th International Conference, OPODIS 2009, Nîmes, France, December 15-18, 2009. Proceedings
Second-order intermodulation mechanisms in CMOS downconverters An in-depth analysis of the mechanisms responsible for second-order intermodulation distortion in CMOS active downconverters is proposed in this paper. The achievable second-order input intercept point (IIP2) has a fundamental limit due to nonlinearity and mismatches in the switching stage and improves with technology scaling. Second-order intermodulation products generated by the input transcondu...
A 14 bit 200 MS/s DAC With SFDR > 78 dBc, IM3 < −83 dBc and NSD < −163 dBm/Hz Across the Whole Nyquist Band Enabled by Dynamic-Mismatch Mapping. This paper presents a 14 bit 200 MS/s current-steering DAC with a novel digital calibration technique called dynamic-mismatch mapping (DMM). By optimizing the switching sequence of current cells to reduce the dynamic integral nonlinearity in an I-Q domain, the DMM technique digitally calibrates all mismatch errors so that both the DAC static and dynamic performance can be significantly improved in...
Design of Q-Enhanced Class-C VCO with Robust Start-Up and High Oscillation Stability A novel topology of Q (quality factor)-enhanced dynamically self-biasing Class-C VCO is proposed in this article. It introduces a bridging capacitor to enhance the quality factor of the oscillator. The enhancement of the quality factor suppresses the squegging phenomenon and the harmonic distortion, and thus improves the phase noise and oscillation stability. The prototype of the proposed circuit was fabricated in a SMIC 0.18 μm CMOS process and the measurement results showed a low phase noise of −125 dBc/Hz at 1 MHz offset from a 3.331 GHz carrier, with a total power consumption of 3.36 mW from a 1.2 V supply. The proposed work exhibited an excellent Figure of Merit (FoM) of −190 dBc/Hz.
A 178.9-dB FoM 128-dB SFDR VCO-Based AFE for ExG Readouts With a Calibration-Free Differential Pulse Code Modulation Technique This article presents a voltage-controlled oscillator (VCO)-based analog front end (AFE) for ExG readout applications with both a wide dynamic range (DR) and high linearity. By using a differential pulse code modulation (DPCM) technique, VCO non-linearity is mitigated by operating the VCO in the small-signal linear regime. To minimize power consumption from the power-hungry gain error calibration,...
1.072222
0.068333
0.066667
0.066667
0.066667
0.046667
0.019556
0.006667
0
0
0
0
0
0
Providing Computing Services through Mobile Devices in a Collaborative Way - A Fog Computing Case Study. The increasing number of mobile devices, such as smartphones, tablets and laptops, and advances in their computing power have enabled them to be considered as computing resources whose proximity can be exploited. The use of nearby resources for computing is growing year by year and is called Fog Computing. Elements at the edge of the Internet are exploited because centralized computing service providers can be unavailable or overloaded. This work focuses on using mobile devices to provide computing services by means of a heuristic called Adapted Maximum Regret, which tries to minimize energy use and avoid unreliable devices. There is also a top-level meta-heuristic which has global information and interconnects different clusters of devices on the edge of the Internet to guarantee QoS. We conducted a set of experiments demonstrating that we should avoid devices with a high degree of failures in order to save more energy when allocating tasks, as well as to decrease application response time and communication, through adjustments in the selection algorithm of external agglomerates.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
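As one concrete instance from this family, the scaled-form ADMM iteration for the lasso alternates a ridge-like x-update, an elementwise soft-threshold z-update, and a dual update; a compact NumPy sketch with illustrative parameters follows.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    # minimize (1/2)||Ax - b||^2 + lam*||z||_1  subject to x = z.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u: scaled dual
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        v = x + u                                        # soft threshold
        z = np.maximum(0.0, v - lam / rho) - np.maximum(0.0, -v - lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # sparse and close to x_true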
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
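A toy sketch of the bookkeeping behind this structure (with assumed simplifications: search, insertion, and repair are omitted, and plain Python lists stand in for the doubly linked lists): every node draws a random membership vector, and at level i the nodes that agree on the first i bits form one sorted list, with level 0 holding every node.

```python
import random
from collections import defaultdict

def build_levels(keys, max_level=3):
    """Group keys into the per-level sorted lists of a skip graph."""
    vectors = {k: tuple(random.randint(0, 1) for _ in range(max_level))
               for k in keys}
    levels = []
    for i in range(max_level + 1):
        groups = defaultdict(list)
        for k in sorted(keys):
            groups[vectors[k][:i]].append(k)  # the prefix picks the list
        levels.append(dict(groups))
    return vectors, levels

# Level 0 is a single sorted list of all keys; higher levels fragment it.
vectors, levels = build_levels([3, 1, 4, 5, 9, 2, 6])
```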
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
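The paper's link budget is driven by the measured, market-weighted headlamp pattern, which cannot be reproduced here; as a generic point of reference only, VLC analyses commonly start from the Lambertian LOS channel DC gain

```latex
H_{\mathrm{LOS}}(0) = \frac{(m+1)\,A_{\mathrm{PD}}}{2\pi d^{2}}
\cos^{m}(\phi)\, T_{s}(\psi)\, g(\psi)\, \cos(\psi),
\qquad
m = \frac{-\ln 2}{\ln\left(\cos \Phi_{1/2}\right)},
```

where A_PD is the detector area, d the link distance, φ and ψ the emission and incidence angles, T_s and g the optical filter and concentrator gains, and Φ_1/2 the transmitter's half-power semiangle.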
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Efficient Design of Spiking Neural Network With STDP Learning Based on Fast CORDIC In emerging Spiking Neural Network (SNN) based neuromorphic hardware design, energy efficiency and on-line learning are attractive advantages mainly contributed by bio-inspired local learning with nonlinear dynamics and at the cost of associated hardware complexity. This paper presents a novel SNN design employing fast COordinate Rotation DIgital Computer (CORDIC) algorithm to achieve fast spike t...
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
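The classification step described above boils down to nearest-subspace matching once each illumination cone has been approximated by a linear subspace; a minimal numpy sketch, where the per-identity basis matrices are assumed to be given:

```python
import numpy as np

def subspace_distance(x, B):
    """Euclidean distance from image vector x to the span of B's columns."""
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)  # orthogonal projection
    return np.linalg.norm(x - B @ coeffs)

def classify(x, subspaces):
    """Pick the identity whose approximated illumination cone is closest.

    `subspaces` maps identity -> basis matrix whose columns span the
    low-dimensional approximation of that identity's illumination cone.
    """
    return min(subspaces, key=lambda ident: subspace_distance(x, subspaces[ident]))
```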
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present the parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120 MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2 GHz general purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture we demonstrate the energy benefit resulting from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, it is the first time that the approximate computing and parallel processing are applied to FPGA based spiking neural networks. The influence of the parallel processing on the benefits of approximate computing is also discussed in detail.
Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons. Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce the detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation are presented with all the hardware cost of memory and multiplier resources removed and with hardware performance of computational speed enhanced by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance the NoC performance including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work can provide an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that supports the scalability of the approach.
Spike Counts based Low Complexity SNN Architecture with Binary Synapse. In this paper, we present an energy and area efficient spike neural network (SNN) processor based on novel spike counts based methods. For the low cost SNN design, we propose hardware-friendly complexity reduction techniques for both of learning and inferencing modes of operations. First, for the unsupervised learning process, we propose a spike counts based learning method. The novel learning app...
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using deep compression technique in spiking neural network chip, the amount of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
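The O(MN²) sorting pass credited above is short enough to sketch directly; a minimal Python version for minimization objectives, taking the population as a list of objective vectors and returning fronts of indices:

```python
def dominates(p, q):
    """True if objective vector p Pareto-dominates q (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_non_dominated_sort(points):
    """Deb et al.'s O(MN^2) fast non-dominated sort; returns index fronts."""
    n = len(points)
    dominated = [[] for _ in range(n)]  # S_i: solutions that i dominates
    counts = [0] * n                    # n_i: how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:  # freed once all its dominators are placed
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front
```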
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
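For readers new to the area, the policies in question cover implicit flows through control structure as well as direct assignments; a small Python illustration of the kind of leak a static information-flow analysis is built to reject:

```python
def leak_bit(secret: int) -> int:
    """No direct assignment from secret to public occurs, yet the
    returned public value reveals one bit of the secret: an implicit flow."""
    public = 0
    if secret % 2 == 1:  # control flow depends on confidential data
        public = 1
    return public
```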
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Combined Feedback–Feedforward Control of Wind Turbines Using State-Constrained Model Predictive Control An application of model predictive control (MPC) to the wind turbine collective pitch and torque control problem in full-load operation is presented. The applied controller is able to include upstream wind speed information, e.g., from light detection and ranging measurements. Furthermore, a method is presented to include constraints on the turbine states in the problem formulation, such as on rotor speed, even in the presence of unmeasured disturbances. In a simulation study, the effects of both preview control and state constraints are evaluated for three different scenarios: normal operation, gusts, and grid-loss load cases. It is found that preview control provides significant benefits in normal operation and under gust conditions. It is further found that including state constraints in the MPC formulation can be used to avoid unnecessary shutdowns due to the violation of the overspeed limit or to control the turbine effectively when the turbine needs to run at a reduced level of generator torque.
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Adaptive tracking control of leader-follower systems with unknown dynamics and partial measurements. In this paper, a decentralized adaptive tracking control is developed for a second-order leader–follower system with unknown dynamics and relative position measurements. Linearly parameterized models are used to describe the unknown dynamics of a self-active leader and all followers. A new distributed system is obtained by using the relative position and velocity measurements as the state variables. By only using the relative position measurements, a dynamic output–feedback tracking control together with decentralized adaptive laws is designed for each follower. At the same time, the stability of the tracking error system and the parameter convergence are analyzed with the help of a common Lyapunov function method. Some simulation results are presented to validate the proposed adaptive tracking control.
Plug-and-Play Decentralized Model Predictive Control for Linear Systems In this technical note, we consider a linear system structured into physically coupled subsystems and propose a decentralized control scheme capable to guarantee asymptotic stability and satisfaction of constraints on system inputs and states. The design procedure is totally decentralized, since the synthesis of a local controller uses only information on a subsystem and its neighbors, i.e. subsystems coupled to it. We show how to automatize the design of local controllers so that it can be carried out in parallel by smart actuators equipped with computational resources and capable to exchange information with neighboring subsystems. In particular, local controllers exploit tube-based Model Predictive Control (MPC) in order to guarantee robustness with respect to physical coupling among subsystems. Finally, an application of the proposed control design procedure to frequency control in power networks is presented.
Event-Based Leader-following Consensus of Multi-Agent Systems with Input Time Delay The event-based control strategy is an effective methodology for tackling the distributed control of multi-agent systems with limited on-board resources. This technical note focuses on event-based leader-following consensus for multi-agent systems described by general linear models and subject to input time delay between controller and actuator. For each agent, the controller updates are event-based and only triggered at its own event times. A necessary condition and two sufficient conditions on leader-following consensus are presented, respectively. It is shown that continuous communication between neighboring agents can be avoided and the Zeno-behavior of triggering time sequences is excluded. A numerical example is presented to illustrate the effectiveness of the obtained theoretical results.
Building Temperature Control Based on Population Dynamics Temperature control in buildings is a dynamic resource allocation problem, which can be approached using nonlinear methods based on population dynamics (i.e., replicator dynamics). A mathematical model of the proposed control technique is shown, including a stability analysis using passivity concepts for an interconnection of a linear multivariable plant driven by a nonlinear control system. In order to illustrate our control strategy, some simulations are performed, and we compare our proposed technique with other control strategies in a model with a fixed structure. Finally, experimental results are shown in order to observe the performance of some of these strategies in a multizone temperature testbed.
Algorithms for chattering reduction in system control Sliding mode control (SMC) is among the popular approaches for control of systems, especially for unknown nonlinear systems. However, the chattering in SMC is generally a problem that needs to be resolved for better control. A time-varying method is proposed for determining the sliding gain function in the SMC. Two alternative tuning algorithms are proposed for reducing the sliding gain function for systems. The first algorithm is for systems with no noise and disturbance but with or without unmodeled dynamics. The second algorithm is for systems with noise, disturbance, unmodeled dynamics, or any combination of them. Compared with the state-dependent, equivalent-control-dependent, and hysteresis loop methods, the proposed algorithms are more straightforward and easy to implement. The performance of the algorithms is evaluated for five different cases. A 90% to 95% reduction of chattering is achieved for the first algorithm used for systems with sensor dynamics only. By using the second algorithm, the chattering is reduced by 70% to 90% for systems with noise and/or disturbance, and by 25% to 50% for systems with a combination of disturbance, noise, and unmodeled dynamics.
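The two tuning algorithms themselves cannot be reconstructed from the abstract, but the underlying idea admits a generic stand-in: shrink the sliding gain over time and smooth the discontinuous sign function, trading a little stiffness for much less chattering. A sketch with all constants chosen purely for illustration:

```python
import math

def smc_control(s, t, k0=5.0, k_min=0.5, decay=0.1, eps=0.05):
    """Sliding-mode control input with a time-decaying gain.

    s: sliding-surface value; t: elapsed time. tanh(s/eps) replaces
    sign(s) inside a boundary layer of width eps to suppress chattering.
    """
    k = k_min + (k0 - k_min) * math.exp(-decay * t)  # time-varying gain
    return -k * math.tanh(s / eps)
```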
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
The price of validity in dynamic networks Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) [6, 13, 21, 34, 35, 37] on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, single-site validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
Power Amplifier Selection for LINC Applications. Linear amplification with nonlinear components (LINC) using a nonisolating combiner has the potential for high efficiency and good linearity. In past work, the interaction between two power amplifiers has been interpreted as a time-varying load presented at the output of amplifiers, and the linearity and efficiency of the LINC system has been evaluated according to how the power amplifiers respond...
An Electro-Magnetic Energy Harvesting System With 190 nW Idle Mode Power Consumption for a BAW Based Wireless Sensor Node. State-of-the-art wireless sensor nodes are mostly supplied by batteries. Such systems have the disadvantage that they are not maintenance free because of the limited lifetime of batteries. Instead, wireless sensor nodes or related devices can be remotely powered. To increase the operating range and applicability of these remotely powered devices an electro-magnetic energy harvester is developed in a 0.13 μm low cost CMOS technology. This paper presents an energy harvesting system that converts RF power to DC power to supply wireless sensor nodes, active transmitters or related systems with a power consumption up to the mW range. This energy harvesting system is used to power a wireless sensor node from the 900 MHz RF field. The wireless sensor node includes an on-chip temperature sensor and a bulk acoustic wave (BAW) based transmitter. The BAW resonator reduces the startup time of the transmitter to about 2 μs which reduces the amount of energy needed in one transmission cycle. The maximum output power of the transmitter is 5.4 dBm. The chip contains an ultra-low-power control unit and consumes only 190 nW in idle mode. The required input power is -19.7 dBm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.114
0.106667
0.106667
0.106667
0.106667
0.106667
0.106667
0.037778
0
0
0
0
0
0
Design of Single-Topology Continuously Scalable-Conversion-Ratio Switched-Capacitor DC–DC Converters This paper introduces a fundamentally different type of switched-capacitor dc–dc converter with large voltage swing flying capacitors, which is made efficient using advanced multiphasing soft charging. A single-capacitor topology is derived that behaves like a gyrator and achieves a continuously scalable-conversion-ratio with high efficiency. A circuit fabricated in a 28-nm CMOS process demonstrates the working principle of the presented topology and advances the state of the art by achieving a peak efficiency of 93%, and a single-topology 0.9–2.03-V output voltage range with more than 90% efficiency at an input voltage of 2 V.
General Top/Bottom-Plate Charge Recycling Technique for Integrated Switched Capacitor DC-DC Converters. Energy loss due to top/bottom plate parasitic capacitances is one of the factors determining the efficiency of integrated switched capacitor DC/DC converters. This loss is particularly significant when MOS gate or deep trench capacitors are used. We propose a technique for top/bottom-plate charge recycling that can be applied with low overhead independently of the converter architecture. Two examp...
A 20-pW Discontinuous Switched-Capacitor Energy Harvester for Smart Sensor Applications. We present a discontinuous harvesting approach for switch capacitor dc-dc converters that enables ultralow-power energy harvesting. Smart sensor applications rely on ultralow-power energy harvesters to scavenge energy across a wide range of ambient power levels and charge the battery. Based on the key observation that energy source efficiency is higher than charge pump efficiency, we present a dis...
Fully-Integrated Reconfigurable Charge Pump With Two-Dimensional Frequency Modulation for Self-Powered Internet-of-Things Applications In this paper, we propose a fully-integrated reconfigurable charge pump in a 0.18-μm CMOS process; this converter is applicable for self-powered Internet-of-Things applications. The proposed charge pump uses a two-dimensional frequency modulation technique, which combines both the pulse-frequency modulation (PFM) and pulse-skip modulation (PSM) techniques. The PFM technique adjusts the operating frequency of the converter according to the variations in the load current, and the PSM technique regulates the output voltage. The proposed two-dimensional frequency modulation technique can improve the overall power conversion efficiency and the response time of the converter under light load conditions. A photovoltaic cell was chosen as the input source of the proposed converter. To adapt to the variations in the output voltage of a photovoltaic cell under different light illumination intensities, we built a reconfigurable converter core with multiple power conversion ratios of 2, 2.5, and 3 for the regulated output voltage of 1.2 V when the input voltage ranged from 0.53 V to 0.7 V. Our measurement results prove that the proposed capacitive power converter could achieve a peak power conversion efficiency of 80.8%, and the efficiency was more than 70% for the load current that ranged from 10 μA to 620 μA.
Conductance Modulation Techniques in Switched-Capacitor DC-DC Converter for Maximum-Efficiency Tracking and Ripple Mitigation in 22 nm Tri-Gate CMOS Switch conductance modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, in 22 nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency improvements up to 15% are measured under low output voltage and load conditions. Load-independent output ripple of ≤50 mV is achieved, enabling reduced interleaving. Test chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
An Indoor Photovoltaic Energy Harvester Using Time-Based MPPT and On-Chip Photovoltaic Cell An indoor photovoltaic (PV) energy harvester using a time-based maximum power point tracking (TBMPPT) circuit and an on-chip PV cell is presented. The TBMPPT circuit selects one of three switched-capacitor DC-DC converters and adjusts the switching frequency to achieve the maximum power. This TBMPPT circuit can also track the light intensity variations. When the TBMPPT circuit is locked, a duty-cycle control technique is used to lower the power. This indoor PV energy harvester is realized in a 0.18 μm CMOS process. Its total active area is 2.89 mm², wherein the area of the PV cell is 1.436 mm². The measured peak power conversion efficiency (PCE) is 68.3%. This energy harvester can cover the wide input power range of 5 μW-500 μW and maintain the PCE > 50% over the input power range of 10 μW-500 μW.
Fully-Integrated High-Conversion-Ratio Dual-Output Voltage Boost Converter With MPPT for Low-Voltage Energy Harvesting. This paper proposes a fully-integrated high-conversion-ratio dual-output voltage boost converter (VBC) with maximum power point tracking (MPPT) circuits for low-voltage energy harvesting. The VBC consists of two voltage generators that generate VOUT1 and VOUT2. VOUT1 and VOUT2 are three and nine times higher than the harvester's output VIN, respectively. VOUT1 is used as a supply voltage for on-ch...
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
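A minimal replay of the paper's central observable, using one LRU stack as a fully associative stand-in for a real set-associative cache: any access that does not hit the most-recently-used line is flagged as a potential working-set change.

```python
def mru_change_events(trace):
    """Return, per access, whether it missed the MRU position."""
    stack, events = [], []
    for addr in trace:
        events.append(bool(stack) and stack[0] != addr)  # MRU change?
        if addr in stack:
            stack.remove(addr)
        stack.insert(0, addr)  # accessed line becomes the new MRU
    return events

# e.g. [False, False, True, True, True, False] marks working-set shifts
print(mru_change_events(["a", "a", "b", "a", "c", "c"]))
```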
Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System. Traditional standalone embedded system is limited in their functionality, flexibility, and scalability. Fog computing platform, characterized by pushing the cloud services to the network edge, is a promising solution to support and strengthen traditional embedded system. Resource management is always a critical issue to the system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images lay in the storage server while computations can be conducted on either embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time for promoting the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling, 2) how to place task images on storage servers, i.e., resource management, and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computation complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation based studies.
Incremental Validation of XML Documents We investigate the incremental validation of XML documents with respect to DTDs and XML Schemas, under updates consisting of element tag renamings, insertions and deletions. DTDs are modeled as extended context-free grammars and XML Schemas are abstracted as "specialized DTDs", allowing to decouple element types from element tags. For DTDs, we exhibit an O(m log n) incremental validation algorithm using an auxiliary structure of size O(n), where n is the size of the document and m the number of updates. For specialized DTDs, we provide an O(m log² n) incremental algorithm, again using an auxiliary structure of size O(n). This is a significant improvement over brute-force re-validation from scratch.
Fully Monolithic Cellular Buck Converter Design for 3-D Power Delivery A fully monolithic interleaved buck dc-dc point-of-load (PoL) converter has been designed and fabricated in a 0.18-μm SiGe BiCMOS process. Target application of the design is 3-D power delivery for future microprocessors, in which the PoL converter will be vertically integrated with the processor using wafer-level 3-D interconnect technologies. Advantages of 3-D power delivery over conventional discrete voltage regulator modules (VRMs) are discussed. The prototype design, using two interleaved buck converter cells each operating at 200 MHz switching frequency and delivering 500 mA output current, is discussed with a focus on the converter power stage and control loop to highlight the tradeoffs unique to such high-frequency, monolithic designs. Measured steady-state and dynamic responses of the fabricated prototype are presented to demonstrate the ability of such monolithic converters to meet the power delivery requirements of future processors.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
Electromagnetic regenerative suspension system for ground vehicles This paper considers an electromagnetic regenerative suspension system (ERSS) that recovers the kinetic energy originated from vehicle vibration, which is previously dissipated in traditional shock absorbers. It can also be used as a controllable damper that can improve the vehicle's ride and handling performance. The proposed electromagnetic regenerative shock absorbers (ERSAs) utilize a linear or a rotational electromagnetic generator to convert the kinetic energy from suspension vibration into electricity, which can be used to reduce the load on the alternator so as to improve fuel efficiency. A complete ERSS is discussed here that includes the regenerative shock absorber, the power electronics for power regulation and suspension control, and an electronic control unit (ECU). Different shock absorber designs are proposed and compared for simplicity, efficiency, energy density, and controlled suspension performances. Both simulation and experiment results are presented and discussed.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows which are adjacent to prediction interval. This algorithm is able to track down the input signal's variation range and automatically adjust the subrange interval and update prediction code. QRS-complex detection algorithm integrates synchronous time sequential ADC and realtime QRS-detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that proposed ADC exhibits 10.72 effective number of bit (ENOB) and 79.63 dB spur-free dynamic range (SFDR) at 10 kHz sample rate given 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSBs. The ADC achieves FoM of 48 fJ/conversion-step at the best case. Also, the prototype is experimented with ECG signal input and extracts the heart beat signal successfully.
1.066667
0.066667
0.066667
0.066667
0.066667
0.066667
0.033333
0
0
0
0
0
0
0
Temporal Thermal Covert Channels in Cloud FPGAs. With increasing interest in Cloud FPGAs, such as Amazon's EC2 F1 instances or Microsoft's Azure with Catapult servers, FPGAs in cloud computing infrastructures can become targets for information leakages via covert channel communication. Cloud FPGAs leverage temporal sharing of the FPGA resources between users. This paper shows that heat generated by one user can be observed by another user who later uses the same FPGA. The covert data transfer can be achieved through simple on-off keying (OOK) and use of multiple FPGA boards in parallel significantly improves data throughput. The new temporal thermal covert channel is demonstrated on Microsoft's Catapult servers with FPGAs running remotely in the Texas Advanced Computing Center (TACC). A number of defenses against the new temporal thermal covert channel are presented at the end of the paper.
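The receiver side of the on-off keying described here reduces to thresholding one temperature sample per signaling interval; a minimal sketch, with sampling, interval framing, and threshold calibration all assumed to happen elsewhere:

```python
def ook_decode(interval_temps, threshold):
    """Map per-interval temperature readings to bits: hot = 1, cool = 0."""
    return [1 if t > threshold else 0 for t in interval_temps]

# e.g. with an assumed calibrated threshold of 45.0 C:
bits = ook_decode([47.2, 43.1, 46.8, 42.9], threshold=45.0)  # -> [1, 0, 1, 0]
```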
Energy Efficient Run-Time Incremental Mapping for 3-D Networks-on-Chip 3-D Networks-on-Chip (NoC) emerge as a potent solution to address both the interconnection and design complexity problems facing future Multiprocessor System-on-Chips (MPSoCs). Effective run-time mapping on such 3-D NoC-based MPSoCs can be quite challenging, as the arrival order and task graphs of the target applications are typically not known a priori, which can be further complicated by stringent energy requirements for NoC systems. This paper thus presents an energy-aware run-time incremental mapping algorithm (ERIM) for 3-D NoC which can minimize the energy consumption due to the data communications among processor cores, while reducing the fragmentation effect on the incoming applications to be mapped, and simultaneously satisfying the thermal constraints imposed on each incoming application. Specifically, incoming applications are mapped to cuboid tile regions for lower energy consumption of communication and the minimal routing. Fragment tiles due to system fragmentation can be gleaned for better resource utilization. Extensive experiments have been conducted to evaluate the performance of the proposed algorithm ERIM, and the results are compared against the optimal mapping algorithm (branch-and-bound) and two heuristic algorithms (TB and TL). The experiments show that ERIM outperforms TB and TL methods with significant energy saving (more than 10%), much reduced average response time, and improved system utilization.
A Security Framework for NoC Using Authenticated Encryption and Session Keys Network on Chip (NoC) is an emerging solution to the existing scalability problems with System on Chip (SoC). However, it is exposed to security threats like extraction of secret information from IP cores. In this paper we present an Authenticated Encryption (AE)-based security framework for NoC based systems. The security framework resides in Network Interface (NI) of every IP core allowing secure communication among such IP cores. The secure cores can communicate using permanent keys whereas temporary session keys are used for communication between secure and non-secure cores. A traffic limiting counter is used to prevent bandwidth denial and access rights table avoids unauthorized memory accesses. We simulated and implemented our framework using Verilog/VHDL modules on top of NoCem emulator. The results showed tolerable area overhead and did not affect the network performance apart from some initial latency.
Secure Model Checkers for Network-on-Chip (NoC) Architectures. As chip multiprocessors (CMPs) are becoming more susceptible to process variation, crosstalk, and hard and soft errors, emerging threats from rogue employees in a compromised foundry are creating new vulnerabilities that could undermine the integrity of our chips with malicious alterations. As the Network-on-Chip (NoC) is a focal point of sensitive data transfer and critical device coordination, there is an urgent demand for secure and reliable communication. In this paper we propose Secure Model Checkers (SMCs), a real-time solution for control logic verification and functional correctness in the micro-architecture to detect Hardware Trojan (HT) induced denial-of-service attacks and improve reliability. In our evaluation, we show that SMCs provides significant security enhancements in real-time with only 1.5% power and 1.1% area overhead penalty in the micro-architecture.
Information Hiding behind Approximate Computation There are many interesting advances in approximate computing recently targeting the energy efficiency in system design and execution. The basic idea is to trade computation accuracy for power and energy during all phases of the computation, from data to algorithm and hardware implementation. In this paper, we explore how to utilize approximate computing for security based information hiding. More specifically, we will demonstrate with examples the potential of embedding information in approximate hardware and approximate data, as well as during approximate computation. We analyze both the security vulnerabilities that this may cause and the potential security applications enabled by such information hiding. We argue that information could be hidden behind approximate computation without compromising the computation accuracy or energy efficiency.
An Evaluation of High-Level Mechanistic Core Models Large core counts and complex cache hierarchies are increasing the burden placed on commonly used simulation and modeling techniques. Although analytical models provide fast results, they do not apply to complex, many-core shared-memory systems. In contrast, detailed cycle-level simulation can be accurate but also tends to be slow, which limits the number of configurations that can be evaluated. A middle ground is needed that provides for fast simulation of complex many-core processors while still providing accurate results. In this article, we explore, analyze, and compare the accuracy and simulation speed of high-abstraction core models as a potential solution to slow cycle-level simulation. We describe a number of enhancements to interval simulation to improve its accuracy while maintaining simulation speed. In addition, we introduce the instruction-window centric (IW-centric) core model, a new mechanistic core model that bridges the gap between interval simulation and cycle-accurate simulation by enabling high-speed simulations with higher levels of detail. We also show that using accurate core models like these is important for memory subsystem studies, and that simple, naive models, like a one-IPC core model, can lead to misleading and incorrect results and conclusions in practical design studies. Validation against real hardware shows good accuracy, with an average single-core error of 11.1% and a maximum of 18.8% for the IW-centric model with a 1.5× slowdown compared to interval simulation.
An exploration of L2 cache covert channels in virtualized environments Recent exploration into the unique security challenges of cloud computing has shown that when virtual machines belonging to different customers share the same physical machine, new forms of cross-VM covert channel communication arise. In this paper, we explore one of these threats, L2 cache covert channels, and demonstrate the limits of this threat by providing a quantification of the channel bit rates and an assessment of its ability to do harm. Through progressively refining models of cross-VM covert channels from the derived maximums, to implementable channels in the lab, and finally in Amazon EC2 itself we show how a variety of factors impact our ability to create effective channels. While we demonstrate a covert channel with considerably higher bit rate than previously reported, we assess that even at such improved rates, the harm of data exfiltration from these channels is still limited to the sharing of small, if important, secrets such as private keys.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
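As a concrete illustration of the algorithm's disk-centric structure, here is a minimal failure-free sketch of a single Disk Paxos ballot in Python, with disks simulated as in-memory dictionaries. All names are illustrative; the contention handling and retry logic of the full protocol are omitted.

```python
# Toy, failure-free sketch of one Disk Paxos ballot from one proposer.
# Each disk holds one block per processor: (mbal, bal, inp).
MAJORITY = 2  # with 3 disks

def run_ballot(disks, me, my_input, mbal):
    # Phase 1: write our block to each disk, then read all blocks there.
    disks_ok, seen = 0, []
    for disk in disks:
        disk[me] = {"mbal": mbal, "bal": 0, "inp": None}
        seen.extend(dict(b) for b in disk.values())
        disks_ok += 1
    if disks_ok < MAJORITY or any(b["mbal"] > mbal for b in seen):
        return None  # a higher ballot is active: abort (and retry later)
    # Adopt the value of the highest-numbered accepted proposal, if any.
    accepted = [b for b in seen if b["bal"] > 0]
    value = max(accepted, key=lambda b: b["bal"])["inp"] if accepted else my_input
    # Phase 2: record (bal = mbal, value) on a majority of disks.
    for disk in disks:
        disk[me] = {"mbal": mbal, "bal": mbal, "inp": value}
    return value  # committed, since no contention occurred here

disks = [dict() for _ in range(3)]  # a majority of the 3 disks must stay up
print(run_ballot(disks, me="p1", my_input="x", mbal=1))  # -> 'x'
```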
Winnowing: local algorithms for document fingerprinting Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents.We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any finger-printing technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
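The winnowing selection rule is compact enough to sketch directly. The following Python uses an assumed MD5-based k-gram hash standing in for the rolling hash a real system would use, and records the rightmost minimal hash of each window as the fingerprint:

```python
import hashlib

def kgram_hashes(text, k):
    # Hash every k-gram (a real implementation would use a rolling hash).
    return [int(hashlib.md5(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
            for i in range(len(text) - k + 1)]

def winnow(hashes, w):
    # In each window of w consecutive hashes keep the minimum, taking the
    # rightmost occurrence on ties; the selected (hash, position) pairs
    # across all windows form the document fingerprint.
    fingerprint = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        j = max(idx for idx, h in enumerate(window) if h == m)  # rightmost min
        fingerprint.add((m, i + j))
    return fingerprint

fp = winnow(kgram_hashes("a do run run run, a do run run", k=5), w=4)
print(sorted(fp))
```

Because adjacent windows often share the same minimum, consecutive selections deduplicate in the set, which is what bounds the fingerprint density.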
DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing The DySER (Dynamically Specializing Execution Resources) architecture supports both functionality specialization and parallelism specialization. By dynamically specializing frequently executing regions and applying parallelism mechanisms, DySER provides efficient functionality and parallelism specialization. It outperforms an out-of-order CPU, Streaming SIMD Extensions (SSE) acceleration, and GPU acceleration while consuming less energy. The full-system field-programmable gate array (FPGA) prototype of DySER integrated into OpenSparc demonstrates a practical implementation.
Design and implementation of a reconfigurable FIR filter Finite impulse response (FIR) filters are very important blocks in digital communication systems. Many efforts have been made to improve the filter performance, e.g., less hardware and higher speed. In addition, software radio has recently gained much attention due to the need for integrated and reconfigurable communication systems. To this end, reconfigurability has become an important issue for the future filter design. In this paper, we present a digit-reconfigurable FIR filter architecture with the finest granularity. The proposed architecture is implemented in a single-poly quadruple-metal 0.35-μm CMOS technology. Measurement results show that the fabricated chip consumes 16.5 mW of power when operating at 86 MHz under 2.5 V.
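For readers unfamiliar with the block being made reconfigurable, a minimal software model of a direct-form FIR filter is shown below; the coefficients are illustrative, not taken from the paper's hardware.

```python
def fir(x, coeffs):
    # Direct-form FIR: y[n] = sum_k b[k] * x[n - k]
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, b in enumerate(coeffs):
            if n - k >= 0:
                acc += b * x[n - k]
        y.append(acc)
    return y

# In software, "reconfiguration" reduces to swapping the coefficient set;
# the paper performs the analogous update at digit granularity in hardware.
smoothing = [0.25, 0.5, 0.25]  # illustrative 3-tap low-pass coefficients
print(fir([1, 0, 0, 0, 1, 1, 1, 1], smoothing))
```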
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
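A minimal numpy sketch of the SC-FDMA (DFT-spread OFDM) modulation chain described here, with assumed parameter values rather than LTE-exact numerology:

```python
import numpy as np

def scfdma_modulate(symbols, n_fft=512, first_sc=0, cp_len=32):
    # DFT-spread OFDM: M-point DFT, localized subcarrier mapping,
    # N-point IFFT, then a cyclic prefix.
    m = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(m)        # M-point DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_sc:first_sc + m] = spread             # localized mapping
    time = np.fft.ifft(grid) * np.sqrt(n_fft)        # N-point IFFT
    return np.concatenate([time[-cp_len:], time])    # prepend cyclic prefix

bits = np.random.randint(0, 2, (2, 64))
qpsk = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
tx = scfdma_modulate(qpsk)
```

The DFT-spreading step is what gives SC-FDMA its lower peak-to-average power ratio relative to plain OFDMA, which is why LTE selected it for the uplink.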
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
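The ADMM iteration at the heart of such reconstruction is short. Below is a generic floating-point sketch for the LASSO form of sparse recovery; the chip's exact problem formulation and fixed-point datapath will differ.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    # ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 :
    #   x-update: ridge-like linear solve (factor once, reuse)
    #   z-update: elementwise soft-thresholding
    #   u-update: scaled dual ascent
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u += x - z
    return z

A = np.random.randn(40, 10)
x0 = np.zeros(10); x0[[2, 7]] = [1.5, -2.0]          # sparse ground truth
z_hat = admm_lasso(A, A @ x0 + 0.01 * np.random.randn(40))
```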
1.1
0.1
0.1
0.1
0.1
0.05
0.007692
0
0
0
0
0
0
0
Software Radio Reconfigurable Hardware System (SHaRe) Recent requirements and the evolution of personal communications systems will tend to increase the number of applications that run over the same hardware/software. While one option is providing this platform with all the algorithms needed, a more suitable one is providing such a platform with the capacity to evolve, over time, from one function to another. Here we present a hardware platform with self-reconfiguration abilities depending on system demand. The reconfiguration can be partial or complete within a short time to cope with the current application. This capability has an important effect on the software radio techniques applied to terminals and base stations, as it will add extra value through quick support for new standards and the incorporation of new software-designed applications.
The Cost Of An Abstraction Layer On FPGA Devices For Software Radio Applications Software Radio applications require a framework to develop and deploy applications, especially those related to radio infrastructure. It is desirable that a given application can be executed on any software radio. But, since hardware platforms used in this context will have multiple architectures and devices, a software layer that makes applications independent from hardware is mandatory. Ad-hoc software for a given hardware platform may produce the best software performance. Conversely, when software is not targeted to any concrete platform, the loss of performance may be excessive and the overhead introduced by any platform-dependent library could become intolerable. In this paper the resource utilization of a software radio application using a simple hardware abstraction layer is studied and compared to an ad-hoc implementation, in order to assess the introduced overhead. The particularity of the hardware abstraction layer is that it runs on a platform whose only processors are FPGA devices.
A soft radio architecture for reconfigurable platforms While many soft/software radio architectures have been suggested and implemented, there remains a lack of a formal design methodology that can be used to design and implement these radios. This article presents a unified architecture for the design of soft radios on a reconfigurable platform called the layered radio architecture. The layered architecture makes it possible to incorporate all of the features of a software radio while minimizing complexity issues. The layered architecture also enables a methodology for incorporating changes and updates into the system. An example implementation of the layered architecture on actual hardware is presented.
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
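Dominance frontiers are compact to compute once immediate dominators are known. The sketch below uses the later Cooper-Harvey-Kennedy formulation, which produces the same structure the paper builds via a bottom-up dominator-tree traversal; the CFG and idom inputs are illustrative.

```python
def dominance_frontiers(preds, idom):
    # DF[n] for every node, given predecessor lists and immediate dominators.
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                 # only join points generate frontiers
            for p in ps:
                runner = p
                while runner != idom[n]: # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom  = {"entry": None, "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # DF[a] = DF[b] = {'join'}
```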
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
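A minimal Python sketch of the scheme over a prime field, with an assumed 127-bit modulus (requires Python 3.8+ for the modular inverse via pow):

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret, n, k):
    # n shares with threshold k: evaluate a random degree-(k-1)
    # polynomial with constant term `secret` at x = 1..n.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```

Any k - 1 shares constrain the polynomial to a family in which every possible secret remains equally likely, which is exactly the information-theoretic guarantee stated above.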
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Theory and Applications of Robust Optimization In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Cache attacks and countermeasures: the case of AES We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
Random walks in peer-to-peer networks: algorithms and evaluation We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: (a) when the overlay topology is clustered, and (b) when a client re-issues the same query while its horizon does not change much. Related to the simulation of random walks is also the distributed computation of aggregates, such as averaging. For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk on an expander graph can achieve statistical properties similar to independent sampling. This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.
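A toy sketch of the basic search primitive, a uniform random walk over an adjacency-list overlay (graph and parameters illustrative):

```python
import random

def random_walk(adj, start, steps):
    # Uniform random walk on an undirected overlay graph given as an
    # adjacency dict; returns the visited nodes (a crude search trace).
    node, visited = start, [start]
    for _ in range(steps):
        node = random.choice(adj[node])
        visited.append(node)
    return visited

# Samples from consecutive steps of a walk on an expander behave almost
# like independent uniform samples, which is the property the paper exploits.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(random_walk(adj, start=0, steps=10))
```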
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher levels of resource integration and the addition of new features in modern multi-processors put significant pressure on their verification. Although a large amount of resources and time is devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower quality products, lower customer satisfaction, diminished brand/company reputation, or even expensive product recalls.
Power saving of a dynamic width controller for a monolithic current-mode CMOS DC-DC converter We propose the dynamic power MOS width controlling technique and the adaptive gate driver voltage technique in order to determine the better approach to power saving in DC-DC converters. The results demonstrate that the dynamic power MOS width controlling technique improves power consumption far more than the adaptive gate driver voltage technique, whether the load current is heavy or light. With the dynamic power MOS width modification, the simulation results show that the efficiency of the current-mode DC-DC buck converter can be improved from 92% to about 98% in heavy load and from 15% to about 16.3% in light load. The adaptive gate driver voltage technique, however, yields only a small improvement in power saving. This means that the dynamic width controller is the better approach to power saving in the DC-DC converter.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.10525
0.1105
0.015857
0.000528
0
0
0
0
0
0
0
0
0
0
An Overview of Efficient Interconnection Networks for Deep Neural Network Accelerators Deep Neural Networks (DNNs) have shown significant advantages in many domains, such as pattern recognition, prediction, and control optimization. The edge computing demand in the Internet-of-Things (IoT) era has motivated many kinds of computing platforms to accelerate DNN operations. However, due to the massive parallel processing, the performance of current large-scale artificial neural networks is often limited by huge communication overheads and storage requirements. As a result, efficient interconnection and data movement mechanisms for future on-chip artificial intelligence (AI) accelerators are worthy of study. Currently, a large body of research aims to find an efficient on-chip interconnection to achieve low-power and high-bandwidth DNN computing. This paper provides a comprehensive investigation of the recent advances in efficient on-chip interconnection and the design methodology of DNN accelerators. First, we provide an overview of the different interconnection methods on the DNN accelerator. Then, interconnection methods on non-ASIC DNN accelerators are discussed. On the other hand, with a flexible interconnection, the DNN accelerator can support different computing flows, which increases the computing flexibility. With this motivation, reconfigurable DNN computing with flexible on-chip interconnection is investigated in this paper. Finally, we investigate the emerging interconnection technologies (e.g., in/near-memory processing) for DNN accelerator design. This paper systematically investigates the interconnection networks in modern DNN accelerator designs. With this article, the readers are able to: 1) understand the interconnection design for DNN accelerators; 2) evaluate DNNs with different on-chip interconnections; 3) become familiar with the trade-offs under different interconnections.
MAESTRO: A Data-Centric Approach to Understand Reuse, Performance, and Hardware Cost of DNN Mappings. The efficiency of an accelerator depends on three factors-mapping, deep neural network (DNN) layers, and hardware-constructing extremely complicated design space of DNN accelerators. To demystify such complicated design space and guide the DNN accelerator design for better efficiency, we propose an analytical cost model, MAESTRO. MAESTRO receives DNN model description and hardware resources inform...
CoSA: Scheduling by Constrained Optimization for Spatial Accelerators Recent advances in Deep Neural Networks (DNNs) have led to active development of specialized DNN accelerators, many of which feature a large number of processing elements laid out spatially, together with a multi-level memory hierarchy and flexible interconnect. While DNN accelerators can take advantage of data reuse and achieve high peak throughput, they also expose a large number of runtime parameters to the programmers who need to explicitly manage how computation is scheduled both spatially and temporally. In fact, different scheduling choices can lead to wide variations in performance and efficiency, motivating the need for a fast and efficient search strategy to navigate the vast scheduling space. To address this challenge, we present CoSA, a constrained-optimization-based approach for scheduling DNN accelerators. As opposed to existing approaches that either rely on designers’ heuristics or iterative methods to navigate the search space, CoSA expresses scheduling decisions as a constrained-optimization problem that can be deterministically solved using mathematical optimization techniques. Specifically, CoSA leverages the regularities in DNN operators and hardware to formulate the DNN scheduling space into a mixed-integer programming (MIP) problem with algorithmic and architectural constraints, which can be solved to automatically generate a highly efficient schedule in one shot. We demonstrate that CoSA-generated schedules significantly outperform state-of-the-art approaches by a geometric mean of up to 2.5× across a wide range of DNN networks while improving the time-to-solution by 90×.
Sparseloop: An Analytical, Energy-Focused Design Space Exploration Methodology for Sparse Tensor Accelerators This paper presents Sparseloop, the first infrastructure that implements an analytical design space exploration methodology for sparse tensor accelerators. Sparseloop comprehends a wide set of architecture specifications including various sparse optimization features such as compressed tensor storage. Using these specifications, Sparseloop can calculate a design&#39;s energy efficiency while accountin...
Layerwise Buffer Voltage Scaling for Energy-Efficient Convolutional Neural Network In order to effectively reduce buffer energy consumption, which constitutes a significant part of the total energy consumption in a convolutional neural network (CNN), it is useful to apply different amounts of energy conservation effort to the different levels of a CNN as the buffer energy to total energy usage ratios can differ quite substantially across the layers of a CNN. This article proposes layerwise buffer voltage scaling as an effective technique for reducing buffer access energy. Error-resilience analysis, including interlayer effects, conducted during design-time is used to determine the specific buffer supply voltage to be used for each layer of a CNN. Then these layer-specific buffer supply voltages are used in the CNN for image classification inference. Error injection experiments with three different types of CNN architectures show that, with this technique, the buffer access energy and overall system energy can be reduced by up to 68.41% and 33.68%, respectively, without sacrificing image classification accuracy.
A Local Computing Cell and 6T SRAM-Based Computing-in-Memory Macro With 8-b MAC Operation for Edge AI Chips This article presents a computing-in-memory (CIM) structure aimed at improving the energy efficiency of edge devices running multi-bit multiply-and-accumulate (MAC) operations. The proposed scheme includes a 6T SRAM-based CIM (SRAM-CIM) macro capable of: 1) weight-bitwise MAC (WbwMAC) operations to expand the sensing margin and improve the readout accuracy for high-precision MAC operations; 2) a c...
Domain-specific hardware accelerators DSAs gain efficiency from specialization and performance from parallelism.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
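For reference, the genie-aided construction builds on the standard Gel'fand-Pinsker result: for a channel p(y|x,s) whose state S is known noncausally at the encoder, the capacity is

```latex
C \;=\; \max_{p(u \mid s),\; x = f(u,s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr],
```

where U is an auxiliary random variable; in the Gaussian case this specializes to dirty-paper coding, as noted in the abstract.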
MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications This paper introduces MorphoSys, a reconfigurable computing system developed to investigate the effectiveness of combining reconfigurable hardware with general-purpose processors for word-level, computation-intensive applications. MorphoSys is a coarse-grain, integrated, and reconfigurable system-on-chip, targeted at high-throughput and data-parallel applications. It is comprised of a reconfigurable array of processing cells, a modified RISC processor core, and an efficient memory interface unit. This paper describes the MorphoSys architecture, including the reconfigurable processor array, the control processor, and data and configuration memories. The suitability of MorphoSys for the target application domain is then illustrated with examples such as video compression, data encryption and target recognition. Performance evaluation of these applications indicates improvements of up to an order of magnitude (or more) on MorphoSys, in comparison with other systems.
Pinning adaptive synchronization of a general complex dynamical network There are two challenging fundamental questions in pinning control of complex networks: (i) How many nodes of a network with fixed structure and coupling strength should be pinned to reach network synchronization? (ii) How much coupling strength should be applied to a network with fixed structure and pinned nodes to realize network synchronization? To address these two questions, we propose a general complex dynamical network model and then further investigate its pinning adaptive synchronization. Based on this model, we obtain several novel adaptive synchronization criteria that indeed give positive answers to these two questions. That is, we provide a simple approximate formula for estimating the required number of pinned nodes and the magnitude of the coupling strength for a given general complex dynamical network. Here, the coupling-configuration matrix and the inner-coupling matrix are not necessarily symmetric. Moreover, our pinning adaptive controllers are rather simple compared with some traditional controllers. A Barabási–Albert network example is finally given to show the effectiveness of the proposed synchronization criteria.
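For orientation, a common form of such a pinned network model with adaptive gains looks as follows; the notation here is a standard reconstruction, not quoted from the paper:

```latex
\dot{x}_i = f(x_i) + c\sum_{j=1}^{N} a_{ij}\,\Gamma x_j + u_i,\qquad
u_i =
\begin{cases}
-c\,d_i(t)\,\Gamma\bigl(x_i - s(t)\bigr), & i \in \mathcal{P}\ (\text{pinned}),\\[2pt]
0, & \text{otherwise},
\end{cases}
\qquad
\dot{d}_i = k_i \bigl(x_i - s\bigr)^{\!\top}\Gamma\bigl(x_i - s\bigr),
```

where s(t) is the target trajectory, A = (a_ij) the coupling-configuration matrix, and Γ the inner-coupling matrix; per the abstract, neither A nor Γ need be symmetric.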
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer-platform-independent descriptions of object and service functionality. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entity groups as 'open' object-oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic, implementation- and location-independent way. The functional entity groups are proposed to be modeled as SDL block types, and the functional entities and sub-entities as SDL process and service types. The objects interact with each other as client or server objects, requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy efficiency is limited due to their excessive support for flexibility. Hardware accelerators may achieve better energy efficiency, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even regardless of the learning accuracy, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes 596 mW only. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
A Data-Compressive Wired-OR Readout for Massively Parallel Neural Recording. Neural interfaces of the future will be used to help restore lost sensory, motor, and other capabilities. However, realizing this futuristic promise requires a major leap forward in how electronic devices interface with the nervous system. Next generation neural interfaces must support parallel recording from tens of thousands of electrodes within the form factor and power budget of a fully implan...
1.071111
0.08
0.08
0.08
0.066667
0.033333
0.004762
0
0
0
0
0
0
0
A Design Procedure for All-Digital Phase-Locked Loops Based on a Charge-Pump Phase-Locked-Loop Analogy In this brief, a systematic design procedure for a second-order all-digital phase-locked loop (PLL) is proposed. The design procedure is based on the analogy between a type-II second-order analog PLL and an all-digital PLL. The all-digital PLL design inherits the frequency response and stability characteristics of the analog prototype PLL. Index Terms: All-digital phase-locked loop (PLL), bilinear transform, digital loop filter, digitally controlled oscillator.
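The index terms point to the bilinear transform as the bridge between the two domains. A hedged sketch of that mapping for a type-II proportional-integral loop filter, with symbols assumed rather than taken from the brief:

```latex
s \;\leftarrow\; \frac{2}{T}\cdot\frac{1 - z^{-1}}{1 + z^{-1}}, \qquad
F(s) = K_p + \frac{K_i}{s}
\;\Rightarrow\;
F(z) = K_p + \frac{K_i T}{2}\cdot\frac{1 + z^{-1}}{1 - z^{-1}},
```

where T is the reference period, so the digital proportional and integral gains are inherited from the charge-pump current, loop-filter components, and oscillator gain of the analog prototype.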
Modeling and Design of Multilevel Bang–Bang CDRs in the Presence of ISI and Noise Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A slope detector based novel multilevel bang-bang CDR architecture is proposed and modeled usin...
Symbol rate timing recovery for higher order partial response channels This paper provides a framework for analyzing and comparing timing recovery schemes for higher order partial response (PR) channels. Several classes of timing recovery schemes are analyzed. Timing recovery loops employing timing gradients or phase detectors derived from the minimum mean-square error (MMSE) criterion, the maximum likelihood (ML) criterion, and the timing function approach of Mueller and Muller (1976) (MM) are analyzed and compared. The paper formulates and analyzes MMSE timing recovery in the context of a slope look-up table (SLT), which is amenable to an efficient implementation. The properties and performance of the SLT-based timing loop are compared with the ML and MM loops. Analysis and time step simulations for a practical 16-state PR magnetic recording channel show that the output noise jitter of the ML phase detector is worse than that of the SLT-based phase detector. This is primarily due to the presence of self-noise in the ML detector. Consequently, the SLT-based phase detector is to be preferred. In comparing the SLT and MM based timing loops, it is found that both schemes have similar jitter performance.
A 1.41pJ/b 224Gb/s PAM-4 SerDes Receiver with 31dB Loss Compensation The emergence of cloud computing, machine learning, and artificial intelligence is gradually saturating network workloads, necessitating rapid growth in datacenter bandwidth, which approximately doubles every 3–4 years. New electrical interfaces that demand dramatic increases in SerDes transceiver speed are being developed to support this. This paper presents a power-efficient 224Gb/s-PAM-4 ADC-ba...
A Digital Clock and Data Recovery Architecture for Multi-Gigabit/s Binary Links In this tutorial paper, we present a general architecture for digital clock and data recovery (CDR) for high-speed binary links. The architecture is based on replacing the analog loop filter and voltage-controlled oscillator (VCO) in a typical analog phase-locked loop (PLL)-based CDR with digital components. We provide a linearized analysis of the bang-bang phase detector and CDR loop including th...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the HSP planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and SAT planners. Heuristic search planners like HSP transform planning problems into problems of heuristic search by automatically extracting heuristics from STRIPS encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
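The independence assumption yields the additive heuristic used by HSP. In standard notation with unit action costs (a reconstruction, not quoted from the paper), the estimated cost g_s(p) of achieving atom p from state s and the heuristic h_add are:

```latex
g_s(p) =
\begin{cases}
0, & p \in s,\\[2pt]
\displaystyle\min_{a\,:\,p \in \mathrm{add}(a)} \Bigl[\, 1 + \sum_{q \in \mathrm{prec}(a)} g_s(q) \Bigr], & \text{otherwise},
\end{cases}
\qquad
h_{\mathrm{add}}(s) = \sum_{p \in \mathrm{Goal}} g_s(p).
```

Summing over preconditions and goal atoms is what encodes the independence assumption; it makes the heuristic informative but, in general, non-admissible.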
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware “neurons” that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
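The core of a PNN is a Parzen-window density estimate per class, with one Gaussian kernel per training sample. A minimal numpy sketch (data and smoothing parameter illustrative):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    # One Gaussian kernel per training sample; the class score is the
    # average kernel response (a Parzen density estimate), and the
    # decision approaches Bayes-optimal as the training set grows.
    scores = {}
    for cls in np.unique(train_y):
        pts = train_X[train_y == cls]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[cls] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.1, 0.0]), X, y))   # -> 0
```

"Training" is just storing the samples, which is why incremental adaptation is so much cheaper than back-propagation here.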
Towards a higher-order synchronous data-flow language The paper introduces a higher-order synchronous data-flow language in which communication channels may themselves transport programs. This provides a means to dynamically reconfigure data-flow processes. The language comes as a natural and strict extension of both Lustre and Lucy. This extension is conservative, in the sense that a first-order restriction of the language can receive the same semantics. We illustrate the expressivity of the language with some examples, before giving the formal semantics of the underlying calculus. The language is equipped with a polymorphic type system allowing types to be automatically inferred and a clock calculus rejecting programs for which synchronous execution cannot be statically guaranteed. To our knowledge, this is the first higher-order synchronous data-flow language where stream functions are first class citizens.
An almost necessary and sufficient condition for robust stability of closed-loop systems with disturbance observer The disturbance observer (DOB)-based controller has been widely employed in industrial applications due to its powerful ability to reject disturbances and compensate plant uncertainties. In spite of various successful applications, no necessary and sufficient condition for robust stability of the closed loop systems with the DOB has been reported in the literature. In this paper, we present an almost necessary and sufficient condition for robust stability when the Q-filter has a sufficiently small time constant. The proposed condition indicates that robust stabilization can be achieved against arbitrarily large (but bounded) uncertain parameters, provided that an outer-loop controller stabilizes the nominal system, and uncertain plant is of minimum phase.
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multiple-input multiple-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of MMSE MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupies a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
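The successive-approximation loop itself is a binary search over DAC codes. A behavioral Python sketch, with the supply and resolution taken from the abstract and the comparator and DAC idealized:

```python
def sar_convert(vin, vref=0.65, bits=10):
    # Successive approximation: trial-set each bit from MSB to LSB and
    # keep it if the resulting DAC voltage stays at or below the input.
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        if trial / (1 << bits) * vref <= vin:   # idealized comparator decision
            code = trial                        # keep the bit
    return code

# The paper's event-triggered correction would add a redundant cycle here
# to detect and fix codes that are off by at most 1 LSB.
print(sar_convert(0.3))   # ~0.3 / 0.65 * 1024 ≈ 472
```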
1.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
A 40-Gb/s PAM-4 Transmitter Based on a Ring-Resonator Optical DAC in 45-nm SOI CMOS. The next generations of large-scale data-centers and supercomputers demand optical interconnects to migrate to 400G and beyond. Microring modulators in silicon-photonics VLSI chips are promising devices to meet this demand due to their energy efficiency and compatibility with dense wavelength division multiplexed chip-to-chip optical I/O. Higher order pulse amplitude modulation (PAM) schemes can b...
A 64-Gb/s 4-PAM Transceiver Utilizing an Adaptive Threshold ADC in 16-nm FinFET. A 64-Gb/s 4-pulse-amplitude modulation (PAM) transceiver fabricated with a 16-nm fin field effect transistor (FinFET) technology is presented with a power consumption that scales with link loss. The transmitter (TX) includes a three-tap feed-forward equalizer (FFE) (one pre and one post) achieving a level separation mismatch ratio (RLM) of 99% and a random jitter (RJ) of 162-fs rms. The maximum swing is 1.1 Vppd at a power consumption of 89.7 mW including clock distribution from a 1.2-V supply, corresponding to 1.39 pJ/bit. The receiver analog front end (RX-AFE) consists of a half-rate (HR) sampling continuous-time linear equalizer (CTLE) and 6-bit flash (1-bit folding) analog-to-digital converter (ADC) capable of non-uniform quantization. The non-uniform thresholds are selected based on a greedy search approach which allows the RX to reduce power at low channel loss in a highly granular manner and achieves better bit error rate (BER) than a uniform quantizer. For a channel with −8.6-dB loss at Nyquist, the ADC can be configured in 2-bit mode, achieving BER < 1e−6 at an RX-AFE power consumption of 100 mW. For a −29.5-dB loss channel, the RX-AFE consumes 283.9 mW and achieves a BER < 1e−4 in conjunction with a software digital equalizer. For a −13.5-dB loss channel, a greedy search is used to optimize the quantization threshold levels, achieving an order of magnitude improvement in BER compared to uniform quantization.
A 60-Gb/s PAM4 Wireline Receiver With 2-Tap Direct Decision Feedback Equalization Employing Track-and-Regenerate Slicers in 28-nm CMOS This article describes a 4-level pulse amplitude modulation (PAM4) receiver incorporating continuous time linear equalizers (CTLEs) and a 2-tap direct decision feedback equalizer (DFE) for applications in wireline communication. A CMOS track-and-regenerate slicer is proposed and employed in the PAM4 receiver. The proposed slicer is designed for the purposes of improving the clock-to-Q delay as well as the output signal swing. A direct DFE in a PAM4 receiver is made possible with the proposed slicer by having rail-to-rail digital feedback signals available with reduced delay, and accordingly relaxing the settling time constraint of the summer. With the 2-tap direct DFE enabled by the proposed slicer, loop-unrolling and inductor-based bandwidth enhancement techniques, which can be area/power intensive, are not necessary at high data rates. The PAM4 receiver fabricated in 28-nm CMOS technology achieves bit-error-rate (BER) better than 1E-12, and energy efficiency of 1.1 pJ/b at 60 Gb/s, measured over a channel with 8.2-dB loss at Nyquist.
A Modelling and Nonlinear Equalization Technique for a 20 Gb/s 0.77 pJ/b VCSEL Transmitter in 32 nm SOI CMOS. This paper describes an ultralow-power VCSEL transmitter in 32 nm SOI CMOS. To increase its power efficiency, the VCSEL is driven at a low bias current. Driving the VCSEL in this condition increases its inherent nonlinearity. Conventional pre-emphasis techniques cannot compensate for this effect because they have a linear response. To overcome this limitation, a nonlinear equalization scheme is pr...
A 64 Gb/s Low-Power Transceiver for Short-Reach PAM-4 Electrical Links in 28-nm FDSOI CMOS A four-level pulse-amplitude modulation (PAM-4) transceiver operating up to 64 Gb/s in 28-nm CMOS fully depleted silicon-on-insulator (FDSOI) for short-reach electrical links is presented. The receiver equalization relies on a flexible continuous-time linear equalizer (CTLE), providing a very accurate channel inversion through a transfer function that can be optimally adapted at low frequency, mid-frequency, and high frequency independently. The CTLE meets the performance requirements of CEI-56G-VSR without requiring the decision feedback equalizer (DFE) implementation. As a result, timing constraints for comparators in data and edge sampling paths may be relaxed by using track-and-hold (T&H) stages, saving power consumption. At the maximum speed, the receiver draws 180 mA from a 1-V supply, corresponding to 2.8 mW/Gb/s only. The transmitter embeds a flexible feed-forward equalizer (FFE) which can be reconfigured to comply with legacy standards. A comparison between current-mode (CM) and voltage-mode (VM) TX drivers is proposed, proving through experiments that the latter yields larger PAM-4 eye openings, thanks to the intrinsically higher speed. The full transceiver (TX, RX, and clock generation) operates from 16 to 64 Gb/s in PAM-4 and 8 to 32 Gb/s in non-return-to-zero (NRZ), and supports 2× and 4× oversampling to reduce the data rate down to 2 Gb/s. A TX-to-RX link at 64 Gb/s, across a 16.8-dB-loss channel, reaches a 10⁻¹² minimum bit-error rate (BER) and a 0.19-UI horizontal eye opening at BER = 10⁻⁶, with 5.02 mW/Gb/s power dissipation.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
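A centralized toy version of the key-to-node mapping makes the single operation concrete; Chord resolves the same successor query in a distributed way with O(log N) messages via finger tables. Identifier size and node names below are illustrative.

```python
import bisect, hashlib

M = 16                                  # identifier bits (2^16 ring)

def h(key):
    # Consistent hashing: keys and nodes share one identifier space.
    return int(hashlib.sha1(str(key).encode()).hexdigest(), 16) % (1 << M)

class ChordRing:
    def __init__(self, nodes):
        self.ids = sorted(h(n) for n in nodes)
    def successor(self, key):
        # First node id clockwise from the key's id, wrapping the ring.
        i = bisect.bisect_left(self.ids, h(key))
        return self.ids[i % len(self.ids)]

ring = ChordRing(["n1", "n2", "n3", "n4"])
print(ring.successor("my-data-item"))    # node id responsible for this key
```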
Computing size-independent matrix problems on systolic array processors A methodology to transform dense matrices into band matrices is presented in this paper. This transformation is accomplished by triangular block partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is performed inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A 12 bit 2.9 GS/s DAC With IM3 < −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high frequencies, co...
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes SPONGENT - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a PRESENT-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To the best of our knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. SPONGENT offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of SPONGENT. Basing the design on a PRESENT-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
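The generic sponge construction underneath is easy to sketch. Note that the toy permutation and byte-level parameters below are stand-ins for illustration only, not the PRESENT-type permutation or bit-level rates SPONGENT actually uses.

```python
def sponge_hash(msg, rate=1, capacity=2, out_len=2, rounds=8):
    # Generic sponge over a 3-byte state: absorb `rate` bytes per
    # permutation call, then squeeze the output.
    def permute(state):
        for _ in range(rounds):                      # toy mixing rounds
            a, b, c = state
            state = [(b ^ (a << 1) ^ 0x5A) & 0xFF, (c + a) & 0xFF, a]
        return state

    state = [0] * (rate + capacity)
    msg = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % rate)   # pad10*
    for i in range(0, len(msg), rate):               # absorbing phase
        for j in range(rate):
            state[j] ^= msg[i + j]
        state = permute(state)
    out = []
    while len(out) < out_len:                        # squeezing phase
        out.extend(state[:rate])
        state = permute(state)
    return bytes(out[:out_len])

print(sponge_hash(b"hello").hex())
```

The "hermetic sponge strategy" mentioned above amounts to claiming no structural distinguishers for the permutation, so all security rests on the capacity.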
MicroGP—An Evolutionary Assembly Program Generator This paper describes µGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. µGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show µGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%–8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.11
0.1
0.1
0.05
0.033333
0
0
0
0
0
0
0
0
0
A Comprehensive Survey of Hardware-assisted Security: from the Edge to the Cloud Sensitive data processing increasingly occurs on machines or devices outside users' control. In the Internet of Things world, for example, the security of data can be put at risk whether the adopted deployment is oriented toward Cloud or Edge Computing. In these systems different categories of attacks—such as physical bus sniffing, cold boot, cache side-channel, buffer overflow, code-reuse, or Iago—can be realized. Software-based countermeasures have been proposed. However, the severity and complexity of these attacks require a level of security that only hardware support can ensure. In recent years, major companies have released a number of architectural extensions aiming to provide hardware-assisted security to software. In this paper, we present a comprehensive survey of HW-assisted technological solutions produced by vendors like Intel, AMD, and ARM for both embedded edge devices and hosting machines such as cloud servers. The different approaches are classified based on the type of attacks prevented and the techniques enforced. An analysis of their mechanisms, issues, and market adoption is provided to support investigations by researchers approaching this field of systems security.
WHISK: an uncore architecture for dynamic information flow tracking in heterogeneous embedded SoCs In this paper, we describe for the first time, how Dynamic Information Flow Tracking (DIFT) can be implemented for heterogeneous designs that contain one or more on-chip accelerators attached to a network-on-chip. We observe that implementing DIFT for such systems requires holistic platform level view, i.e., designing individual components in the heterogeneous system to be capable of supporting DIFT is necessary but not sufficient to correctly implement full-system DIFT. Based on this observation we present a new system architecture for implementing DIFT, and also describe wrappers that provide DIFT functionality for third-party IP components. Results show that our implementation minimally impacts performance of programs that do not utilize DIFT, and the price of security is constant for modest amounts of tagging and then sub-linearly increases with the amount of tagging.
SHIELD: a software hardware design methodology for security and reliability of MPSoCs Security of MPSoCs is an emerging area of concern in embedded systems. Security is jeopardized by code injection attacks, which are the most common types of software attacks. Previous attempts to detect code injection in MPSoCs have been burdened with significant performance overheads. In this work, we present a hardware/software methodology "SHIELD" to detect code injection attacks in MPSoCs. SHIELD instruments the software programs running on application processors in the MPSoC and also extracts control flow and basic-block execution time information for runtime checking. We employ a dedicated security processor (monitor processor) to supervise the application processors on the MPSoC. Custom hardware is designed and used in the monitor and application processors. The monitor processor uses the custom hardware to rapidly analyze information communicated to it from the application processors at runtime. We have implemented SHIELD on a commercial extensible processor (Xtensa LX2) and tested it on a multiprocessor JPEG encoder program. In addition to code injection attacks, the system is also able to detect 83% of bit-flip errors in the control flow instructions. The experiments show that SHIELD produces systems with runtimes at least 9 times faster than the previous solution. SHIELD incurs a runtime (clock cycles) performance overhead of only 6.6% and an area overhead of 26.9%, when compared to a non-secure system.
A security-aware routing implementation for dynamic data protection in zone-based MPSoC. This work proposes a secure Network-on-Chip (NoC) approach, which enforces the encapsulation of sensitive traffic inside the asymmetrical security zones while using minimal and non-minimal paths. The NoC routing guarantees that the sensitive traffic communicates only through trusted nodes, which belong to a security zone. As the shape of the zones may change during operation, the sensitive traffic must be routed through low-risk paths. The experimental results show that this proposal can be an efficient and scalable alternative for enforcing the data protection inside a Multi-Processor System-on-Chip (MPSoC).
TaintHLS: High-Level Synthesis For Dynamic Information Flow Tracking Dynamic information flow tracking (DIFT) is a technique to track potential security vulnerabilities in software and hardware systems at run time. Untrusted data are marked with tags (tainted), which are propagated through the system and analyzed to prevent their unsafe use. DIFT is not supported in heterogeneous systems, especially in hardware accelerators: currently, DIFT logic is manually generated and integrated into the accelerators. This process is error-prone and can hamper the identification of security violations in heterogeneous systems. We present TaintHLS, which automatically generates a microarchitecture to support baseline operations and a shadow microarchitecture for intrinsic DIFT support in hardware accelerators, while providing variable granularity of taint tags. TaintHLS offers a companion high-level synthesis (HLS) methodology to automatically generate such DIFT-enabled accelerators from a high-level specification. We extended a state-of-the-art HLS tool to generate DIFT-enhanced accelerators and demonstrated the approach on numerous benchmarks. The DIFT-enabled accelerators have negligible performance and no more than 30% hardware overhead.
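As a conceptual illustration of the tag-propagation rule that DIFT shadow logic implements, the following tiny Python model attaches a one-bit tag to every value and ORs operand tags into each result; the Tagged type and op_add helper are hypothetical and stand in for the hardware TaintHLS would generate.

```python
# Hypothetical one-bit taint model: tags travel with values and OR together.
from dataclasses import dataclass

@dataclass
class Tagged:
    value: int
    taint: bool = False          # one-bit tag (TaintHLS supports finer granularity)

def op_add(a: Tagged, b: Tagged) -> Tagged:
    # Shadow logic: the result is tainted if either operand is tainted.
    return Tagged(a.value + b.value, a.taint or b.taint)

untrusted = Tagged(41, taint=True)   # e.g., data from an untrusted interface
constant = Tagged(1)
result = op_add(untrusted, constant)
assert result.taint                  # the tag propagated with the data
print(result)
```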
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
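To make the CP model concrete, a short numpy sketch assembles a 3-way tensor as a sum of R rank-one tensors, i.e., outer products of columns of the factor matrices; the sizes and random factors are arbitrary stand-ins.

```python
# Build a 3-way tensor from the CP model X[i,j,k] = sum_r A[i,r] B[j,r] C[k,r].
import numpy as np

I, J, K, R = 4, 5, 6, 3                       # arbitrary sizes and rank
rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))

X = np.einsum("ir,jr,kr->ijk", A, B, C)       # the CP reconstruction in one call

# Equivalently: a sum of R rank-one tensors (outer products of factor columns).
X_check = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
              for r in range(R))
assert np.allclose(X, X_check)
print(X.shape)                                # (4, 5, 6)
```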
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
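A minimal sketch of the two randomizations described above, bootstrap sampling per tree and a random subset of features at each split, using scikit-learn decision trees and a plurality vote; the synthetic dataset and ensemble size are arbitrary, and this is not Breiman's reference implementation.

```python
# Bagged decision trees with random feature subsets, combined by voting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # synthetic stand-in dataset
y = (X[:, 0] + X[:, 1] > 0).astype(int)

trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))          # bootstrap sample
    tree = DecisionTreeClassifier(max_features="sqrt")  # random features per split
    trees.append(tree.fit(X[idx], y[idx]))

votes = np.stack([t.predict(X) for t in trees])     # (n_trees, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)   # plurality vote
print("training accuracy:", (majority == y).mean())
```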
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
QsCores: trading dark silicon for scalable energy efficiency with quasi-specific cores Transistor density continues to increase exponentially, but power dissipation per transistor is improving only slightly with each generation of Moore's law. Given the constant chip-level power budgets, this exponentially decreases the percentage of transistors that can switch at full frequency with each technology generation. Hence, while the transistor budget continues to increase exponentially, the power budget has become the dominant limiting factor in processor design. In this regime, utilizing transistors to design specialized cores that optimize energy-per-computation becomes an effective approach to improve system performance. To trade transistors for energy efficiency in a scalable manner, we propose Quasi-specific Cores, or QsCores, specialized processors capable of executing multiple general-purpose computations while providing an order of magnitude more energy efficiency than a general-purpose processor. The QsCores design flow is based on the insight that similar code patterns exist within and across applications. Our approach exploits these similar code patterns to ensure that a small set of specialized cores support a large number of commonly used computations. We evaluate QsCores's ability to target both a single application library (e.g., data structures) as well as a diverse workload consisting of applications selected from different domains (e.g., SPECINT, EEMBC, and Vision). Our results show that QsCores can provide 18.4 x better energy efficiency than general-purpose processors while reducing the amount of specialized logic required to support the workload by up to 66%.
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
Current-mode adaptively hysteretic control for buck converters with fast transient response and improved output regulation This paper presents a current-mode adaptively hysteretic control (CMAHC) technique to achieve fast transient response for DC-DC buck converters. A complementary full-range current sensor comprising charging-path and discharging-path sensing transistors is proposed to track the inductor current seamlessly. With the proposed current-mode adaptively hysteretic topology, the inductor current is continuously monitored, and the adaptively hysteretic threshold is dynamically adjusted according to feedback information from the output voltage level. Therefore, a fast load-transient response can be achieved. Besides, the output regulation performance is also improved by the proposed dynamic current-scaling circuitry (DCSC). Moreover, the proposed CMAHC topology can be used in a nearly zero-R_ESR design configuration. The prototype fabricated using a TSMC 0.25 μm CMOS process occupies an area of 1.78 mm² including all bonding pads. Experimental results show that the output voltage ripple is smaller than 30 mV over a wide loading current range from 0 mA to 500 mA, with maximum power conversion efficiency higher than 90%. The recovery time from light to heavy load (100 to 500 mA) is smaller than 5 μs.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
MIMO Broadcasting for Simultaneous Wireless Information and Power Transfer Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short-/mid-/long-range applications, respectively. In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound.
Low-Power Far-Field Wireless Powering for Wireless Sensors This paper discusses far-field wireless powering for low-power wireless sensors, with applications to sensing in environments where it is difficult or impossible to change batteries and where the exact position of the sensors might not be known. With expected radio-frequency (RF) power densities in the 20–200 μW/cm² range, and desired small sensor overall size, low-power nondirective wireless powering is appropriate for sensors that transmit data at low duty cycles. The sensor platform is powered through an antenna which receives incident electromagnetic waves in the gigahertz frequency range and couples the energy to a rectifier circuit which charges a storage device (e.g., thin-film battery) through an efficient power management circuit; the entire platform, including sensors and a low-power wireless transmitter, is controlled through a low-power microcontroller. For low incident power density levels, codesign of the RF powering and the power management circuits is required for optimal performance. Results for hybrid and monolithic implementations of the power management circuitry are presented with integrated antenna rectifiers operating in the 1.96-GHz cellular and in 2.4-GHz industrial-scientific-medical (ISM) bands.
High-Efficiency Millimeter-Wave Energy-Harvesting Systems With Milliwatt-Level Output Power. The output power level and the power conversion efficiency (PCE) rate of the energy-harvesting systems are vital factors in realizing effective millimeter-wave wireless power transfer solutions that can power battery-less and charge coil-free smart everyday objects. Two 60-GHz energy harvesters that use a tuned complementary cross-coupled oscillator-like rectifying circuitry in 40-nm digital CMOS ...
Physical-Layer Security Analysis of Mixed SIMO SWIPT RF and FSO Fixed-Gain Relaying Systems This paper studies the physical-layer security problem for mixed single-input multiple-output (SIMO) simultaneous wireless information and power transfer (SWIPT) based radio frequency (RF) and free-space optical (FSO) communication systems. The FSO link experiences Málaga turbulence and each RF link suffers from Nakagami-$m$ fading and path loss. We consider one energy harvesting receiver in our system model that may act as a potential eavesdropper. More precisely, to investigate the secrecy performance of the considered mixed SIMO SWIPT based RF and FSO communication system, we derive closed-form expressions for the average secrecy capacity and the lower bound of the secrecy outage probability by considering the fixed-gain relaying scheme, the multiple-antenna technique, energy harvesting, the intensity modulation with direct detection, and the heterodyne detection techniques in the presence of pointing error.
EPS-TRA: Energy Efficient Peer Selection and Time Switching Ratio Allocation for SWIPT-Enabled D2D Communication This paper considers a device-to-device (D2D) network with Simultaneous Wireless Information and Power Transfer (SWIPT) enabled devices to ensure self-sustained communication in situations like disasters. Such direct-link networks can ensure connectivity with devices whose backup power has drained, when trapped in collapsed infrastructure, through mutual sharing of energy over the RF link. To guarantee successful execution of a SWIPT session for an isolated device in the wake of disasters, it is pertinent to select a reliable peer with the ultimate aim of maximizing link Energy Efficiency (EE). In practice, Energy Harvesting (EH) is not achievable after Information Decoding (ID); however, it has been made possible through splitting the signal in the time domain. Selection of a D2D peer for self-sustained communication with an objective to maximize EE through optimal time-based splitting of the signal has not been extensively studied. In this paper, to achieve the aforesaid goal, we formulate a joint problem of peer association and time switching ratio allocation with the objective of maximizing the EE for a device trapped under collapsed infrastructure. We propose an Energy efficient Peer Selection and Time switching Ratio Allocation (EPS-TRA) algorithm to solve the proposed mixed-integer problem. Numerical results validate our proposed approach in achieving better EE when compared with a Uniform Allocation Scheme of time slots for EH and ID. Furthermore, the results explain how the EE of the link varies with the choice of constrained variables, i.e., data rate and harvested energy.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
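The algebraic viewpoint can be demonstrated on a toy example: enumerate all 2^n states of a small Boolean network, tabulate the transition map (the finite analogue of the paper's transition matrix), and read off fixed points and cycle lengths. The 3-node update rules are arbitrary, and plain state enumeration stands in for the semi-tensor-product machinery.

```python
# Enumerate the state space of a 3-node Boolean network and analyze it.
from itertools import product

def step(state):
    a, b, c = state
    return (b and c, not a, a or c)      # arbitrary example update rules

states = list(product([False, True], repeat=3))
nxt = {s: step(s) for s in states}       # the transition map over all 2**3 states

print("fixed points:", [s for s in states if nxt[s] == s])

def cycle_length(s):
    # Iterate until a state repeats; the transient tail drops out of the count.
    seen, t = {}, 0
    while s not in seen:
        seen[s] = t
        s, t = nxt[s], t + 1
    return t - seen[s]

print("attractor cycle lengths:", sorted({cycle_length(s) for s in states}))
```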
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns This article describes three powerful general-purpose families of exploits for buffer overruns: arc injection, pointer subterfuge, and heap smashing. These new techniques go beyond the traditional "stack smashing" attack and invalidate traditional assumptions about buffer overruns.
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI (linear matrix inequality)-based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers. LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Model predictive control: theory and practice—a survey We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and ∞-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.
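A minimal receding-horizon sketch under strong simplifying assumptions (linear discrete-time plant, quadratic cost, no constraints): at each step a finite-horizon problem is solved in batch form and only the first input is applied. The double-integrator plant, horizon, and weights are arbitrary, and this stands in for the MPC concept rather than any specific scheme surveyed above.

```python
# Unconstrained linear-quadratic MPC for x+ = A x + B u, solved in batch form.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # example plant: double integrator
B = np.array([[0.0], [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 10   # stage weights and horizon (arbitrary)

# Batch prediction matrices: stacked predictions X = Phi @ x0 + Gamma @ U.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        Gamma[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B

Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
# Minimizing X'QX + U'RU over U gives the linear law U* = -K x0.
K = np.linalg.solve(Gamma.T @ Qbar @ Gamma + Rbar, Gamma.T @ Qbar @ Phi)

x = np.array([[1.0], [0.0]])
for _ in range(30):        # receding horizon: re-solve, apply only the first move
    U = -K @ x
    x = A @ x + B * U[0, 0]
print("final state:", x.ravel())
```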
Contractive model predictive control for constrained nonlinear systems. This paper addresses the development of stabilizing state and output feedback model predictive control (MPC) algorithms for constrained continuous-time nonlinear systems with discrete observations. Moreover, we propose a nonlinear observer structure for this class of systems and derive sufficient conditions under which this observer provides asymptotically convergent estimates. The MPC scheme proposed consists of a basic finite horizon nonlinear MPC technique with the introduction of an additional state constraint, which has been called a contractive constraint. The resulting MPC scheme has been denoted contractive MPC. This is a Lyapunov-based approach in which a Lyapunov function chosen a priori is decreased, not continuously, but discretely; it is allowed to increase at other times. We show in this work that the implementation of this additional constraint into the online optimization makes it possible to prove strong nominal stability properties of the closed-loop system.
Quadratic programming with one negative eigenvalue is NP-hard We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Hacijan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard.
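For reference, the problem class can be restated in a standard form (the restatement is ours):

```latex
\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x
\qquad \text{subject to} \qquad A x \le b
```

where $Q$ is symmetric with exactly one negative eigenvalue; the result above shows that even this mildly nonconvex case is NP-hard.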
Energy Management Strategies for Vehicular Electric Power Systems In the near future, a significant increase in electric power consumption in vehicles is expected. To limit the associated increase in fuel consumption and exhaust emissions, smart strategies for the generation, storage/retrieval, distribution, and consumption of electric power will be used. Inspired by the research on energy management for hybrid electric vehicles (HEVs), this paper presents an ex...
Uncertain Nonlinear Receding Horizon Control Systems Subject To Non-Zero Computation Time In this paper Receding Horizon Control (RHC) of an uncertain nonlinear system is considered where the computation time is non-negligible. In a well-known method, the solution process of the optimal control problem is started one sampling period in advance by using the prediction of the initial conditions, thus giving the controller a reasonable deadline to complete the optimization process. The current work suggests the use of the theory of neighboring extremal paths to improve the performance of the existing method by adding a correction phase to the previous method and therefore recovering the exact solution in the presence of prediction errors. An immediate result would be that the properties of the RHC techniques involving zero computation time would be valid for practical systems in the actual implementation, where a zero computation time is unachievable. The new approach is applied for the control of a mobile robot system which demonstrates significant performance improvements over the existing method.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked
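A minimal sketch of the classical Luenberger structure that such surveys typically start from, in discrete time: the observer copies the plant model and corrects it with the output error via the implementation equation xhat+ = A xhat + B u + L (y − C xhat). The plant matrices and the gain L below are arbitrary choices (here L makes A − LC stable); in practice L comes from pole placement or Kalman design.

```python
# Discrete-time Luenberger observer for a toy second-order plant.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])                 # only the first state is measured
L = np.array([[0.6], [0.8]])               # assumed gain; A - L C has eigenvalues 0.8, 0.6

x = np.array([[1.0], [-0.5]])              # true state (unknown to the observer)
xhat = np.zeros((2, 1))                    # observer's estimate

for _ in range(100):
    u = np.array([[0.1]])
    y = C @ x                              # the observer sees only u and y
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat)
    x = A @ x + B @ u

print("estimation error:", (x - xhat).ravel())
```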
An Optimal Control Approach to the Multi-Agent Persistent Monitoring Problem in Two-Dimensional Spaces We address the persistent monitoring problem in two-dimensional mission spaces where the objective is to control the trajectories of multiple cooperating agents to minimize an uncertainty metric. In a one-dimensional mission space, we have shown that the optimal solution is for each agent to move at maximal speed and switch direction at specific points, possibly waiting some time at each such point before switching. In a two-dimensional mission space, such simple solutions can no longer be derived. An alternative is to optimally assign each agent a linear trajectory, motivated by the one-dimensional analysis. We prove, however, that elliptical trajectories outperform linear ones. With this motivation, we formulate a parametric optimization problem in which we seek to determine such trajectories. We show that the problem can be solved using Infinitesimal Perturbation Analysis (IPA) to obtain performance gradients on line and obtain a complete and scalable solution. Since the solutions obtained are generally locally optimal, we incorporate a stochastic comparison algorithm for deriving globally optimal elliptical trajectories. Numerical examples are included to illustrate the main result, allow for uncertainties modeled as stochastic processes, and compare our proposed scalable approach to trajectories obtained through off-line computationally intensive solutions.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
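As a loose software illustration of the idea, a minimal Python sketch of direct-threaded code: the program is a flat list of routine references interleaved with operands, and execution simply indexes from one routine to the next with no central decode switch. Python function references stand in for machine addresses.

```python
# "Code" is a flat list of routine references plus inline operands; the
# inner loop fetches the next routine and calls it directly.
def push(vm):
    vm["stack"].append(vm["code"][vm["ip"]]); vm["ip"] += 1   # inline operand

def add(vm):
    b, a = vm["stack"].pop(), vm["stack"].pop()
    vm["stack"].append(a + b)

def halt(vm):
    vm["running"] = False

# Threaded program computing (2 + 3) + 4.
code = [push, 2, push, 3, add, push, 4, add, halt]

vm = {"code": code, "ip": 0, "stack": [], "running": True}
while vm["running"]:
    op = vm["code"][vm["ip"]]
    vm["ip"] += 1
    op(vm)                       # dispatch: no decoding, just call the routine
print(vm["stack"])               # prints [9]
```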
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
PRESENT: An Ultra-Lightweight Block Cipher With the establishment of the AES the need for new block ciphers has been greatly diminished; for almost all block cipher applications the AES is an excellent and preferred choice. However, despite recent implementation advances, the AES is not suitable for extremely constrained environments such as RFID tags and sensor networks. In this paper we describe an ultra-lightweight block cipher, present. Both security and hardware efficiency have been equally important during the design of the cipher and at 1570 GE, the hardware requirements for present are competitive with today's leading compact stream ciphers.
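For concreteness, a sketch of a single PRESENT round on the 64-bit state: key addition, the published 4-bit S-box over all 16 nibbles, and the bit permutation P(i) = 16·i mod 63 (bit 63 fixed). The key schedule and the 31-round loop are omitted and the demo round key is a placeholder, so this is not a usable cipher implementation.

```python
# One PRESENT round; key schedule and the 31-round loop omitted.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # the published 4-bit S-box

def present_round(state: int, round_key: int) -> int:
    state ^= round_key                              # addRoundKey
    # sBoxLayer: substitute each of the 16 nibbles of the 64-bit state.
    state = sum(SBOX[(state >> (4 * i)) & 0xF] << (4 * i) for i in range(16))
    # pLayer: bit i moves to position 16*i mod 63; bit 63 stays in place.
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

print(hex(present_round(0x0123456789ABCDEF, 0x0)))  # placeholder round key
```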
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2–0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.1162
0.1162
0.110333
0.110333
0.110333
0.036778
0.002933
0
0
0
0
0
0
0
Continuous-Time and Sampled-Data Stabilizers for Nonlinear Systems With Input and Measurement Delays In this paper, we propose continuous-time and sampled-data output feedback controllers for nonlinear multi-input multi-output systems with time-varying measurement and input delays, with no restriction on the bound or serious limitations on the growth of the nonlinearities. A state prediction is generated by chains of saturated high-gain observers with switching error-correction terms, and the state prediction is used to stabilize the system with saturated controls. The observers reconstruct the unmeasurable states at different delayed time instants, which partition the maximal variation interval of the time-varying delays. These delayed time instants depend both on the magnitude of the delays and on the growth rate of the nonlinearities. We also design sampled-data stabilizers as zero-order discretization of a hybrid modification (with continuous-time states and discrete-time control and innovations) of the continuous-time stabilizers.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
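Dominance frontiers, the paper's key concept, can be computed compactly once immediate dominators are known; the sketch below uses the later formulation popularized by Cooper, Harvey, and Kennedy rather than the paper's original algorithm, on an arbitrary diamond-shaped CFG.

```python
# Dominance frontiers from predecessor lists and immediate dominators (idom).
def dominance_frontiers(preds, idom):
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) < 2:
            continue                      # only join points contribute to frontiers
        for p in ps:
            runner = p
            while runner != idom[n]:      # walk up the dominator tree from each pred
                df[runner].add(n)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # 'a' and 'b' have frontier {'join'}
```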
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
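A toy sketch of Chord's single operation, mapping a key to its successor node on the identifier circle; the 6-bit ring and node identifiers are arbitrary, and a linear scan over a sorted list replaces the SHA-1 identifiers and O(log N) finger-table routing of real Chord.

```python
# Toy model of the key -> successor-node mapping on a 2**M identifier ring.
from bisect import bisect_left

M = 6                                # identifier bits: ring positions 0..63
nodes = sorted([1, 12, 23, 38, 51])  # assumed node identifiers

def successor(key: int) -> int:
    k = key % (1 << M)
    i = bisect_left(nodes, k)        # first node id >= k ...
    return nodes[i % len(nodes)]     # ... wrapping around the circle

for key in (10, 24, 60):
    print(f"key {key} -> node {successor(key)}")
# key 10 -> node 12, key 24 -> node 38, key 60 -> node 1
```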
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
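A minimal ADMM sketch for one application named above, the lasso, in splitting form: minimize (1/2)||Ax − b||² + λ||z||₁ subject to x − z = 0, alternating a ridge-like x-update, a soft-thresholding z-update, and a dual update. Problem data, ρ, λ, and the iteration count are arbitrary stand-ins.

```python
# ADMM iterations for the lasso in splitting form (x-, z-, and dual updates).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))        # random stand-in problem data
b = rng.normal(size=50)
lam, rho = 0.5, 1.0                  # regularization and penalty (arbitrary)

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

z, u = np.zeros(20), np.zeros(20)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(20), Atb + rho * (z - u))  # x-update
    z = soft_threshold(x + u, lam / rho)                              # z-update
    u = u + x - z                                                     # dual ascent

print("nonzeros in z:", int((np.abs(z) > 1e-6).sum()))
```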
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III response is synthesized by combining a high-gain low-frequency path (via an error amplifier) with a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips were fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, this hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
Scores (score_0 to score_13): 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Real-Time Neuromorphic System for Large-Scale Conductance-Based Spiking Neural Networks. The investigation of human intelligence, cognitive systems, and the functional complexity of the human brain is significantly facilitated by high-performance computational platforms. In this paper, we present a real-time digital neuromorphic system for the simulation of large-scale conductance-based spiking neural networks (LaCSNN), which has the advantages of both high biological realism and large network scale. Using this system, a detailed large-scale cortico-basal ganglia-thalamocortical loop is simulated using a scalable 3-D network-on-chip (NoC) topology, with six Altera Stratix III field-programmable gate arrays simulating 1 million neurons. A novel router architecture is presented to deal with the communication of multiple data flows in the multinuclei neural network, a problem not solved in previous NoC studies. At the single-neuron level, cost-efficient conductance-based neuron models are proposed; their multiplier-less realization uses on average 95% less memory and no DSP resources, which is the foundation of the large-scale realization. An analysis of the modified models is conducted, including investigation of bifurcation behaviors and ionic dynamics, demonstrating the required range of dynamics at a much lower resource cost. The proposed LaCSNN system is shown to outperform alternative state-of-the-art approaches previously used to implement large-scale spiking neural networks, and it enables a broad range of potential applications due to its real-time computational power.
First-Spike-Based Visual Categorization Using Reward-Modulated STDP. Reinforcement learning (RL) has recently regained popularity with major achievements such as beating the European champion at the game of Go. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme wher...
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
A 1000 fps Vision Chip Based on a Dynamically Reconfigurable Hybrid Architecture Comprising a PE Array Processor and Self-Organizing Map Neural Network This paper proposes a vision chip with a hybrid architecture comprising a dynamically reconfigurable processing element (PE) array processor and a self-organizing map (SOM) neural network. It integrates a high-speed CMOS image sensor, three von Neumann-type processors, and a non-von Neumann-type bio-inspired SOM neural network. The processors consist of a pixel-parallel PE array processor with O(N×N) parallelism, a row-parallel row-processor (RP) array processor with O(N) parallelism, and a thread-parallel dual-core microprocessor unit (MPU) with O(2) parallelism. They execute low-, mid-, and high-level image processing, respectively. The SOM network speeds up high-level processing in pattern recognition tasks by O(N/4×N/4), which improves the chip performance remarkably. The SOM network can be dynamically reconfigured from the PE array to save considerable chip area. A prototype chip with a 256 × 256 image sensor, a reconfigurable 64 × 64 PE array processor/16 × 16 SOM network, a 64 × 1 RP array processor, and a dual-core 32-bit MPU was implemented in a 0.18 μm CMOS image sensor process. The chip can perform image capture and image processing at various levels at high speed and in a flexible fashion. Various complicated applications, including M-S functional solution, horizon estimation, hand gesture recognition, and face recognition, are demonstrated at high speeds from several hundred to over 1000 fps.
Efficient FPGA Implementations of Pair and Triplet-Based STDP for Neuromorphic Architectures Synaptic plasticity is envisioned to bring about learning and memory in the brain. Various plasticity rules have been proposed, among which spike-timing-dependent plasticity (STDP) has gained the highest interest across various neural disciplines, including neuromorphic engineering. Here, we propose highly efficient digital implementations of pair-based STDP (PSTDP) and triplet-based STDP (TSTDP) on field programmable gate arrays that do not require dedicated floating-point multipliers and hence need minimal hardware resources. The implementations are verified by using them to replicate a set of complex experimental data, including those from pair, triplet, quadruplet, frequency-dependent pairing, as well as Bienenstock–Cooper–Munro experiments. We demonstrate that the proposed TSTDP design has a higher operating frequency that leads to 2.46× faster weight adaptation (learning) and achieves an 11.55-fold improvement in resource usage, compared to a recent implementation of a calcium-based plasticity rule capable of exhibiting similar learning performance. In addition, we show that the proposed PSTDP and TSTDP designs, respectively, consume 2.38× and 1.78× fewer resources than the most efficient PSTDP implementation in the literature. As a direct result of the efficiency and powerful synaptic capabilities of the proposed learning modules, they could be integrated into large-scale digital neuromorphic architectures to enable high-performance STDP learning.
7.6 A 65nm 236.5nJ/Classification Neuromorphic Processor with 7.5% Energy Overhead On-Chip Learning Using Direct Spike-Only Feedback Advances in neural network and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance convolutional neural network (CNN) accelerators to energy-efficient deep-neural network (DNN) edge computing systems [1]. While most studies have focused on designing inference engines, recent works have shown that on-chip training could serve practical purposes such as compensating for process variations of in-memory computing [2] or adapting to changing environments in real time [3]. However, these successes were limited to relatively simple tasks mainly due to the large energy overhead of the training process. These problems arise primarily from the high-precision arithmetic and memory required for error propagation and weight updates, in contrast to error-tolerant inference operation; the capacity requirements of a learning system are significantly higher than those of an inference system [4].
Minitaur, an Event-Driven FPGA-Based Spiking Network Accelerator Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks, including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator designed for low power and high performance. As a field-programmable gate array-based system, it can be integrated into existing robotics or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
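To illustrate the idea in the abstract above: a threaded-code program is stored as a sequence of routine addresses that are executed by jumping from one to the next, with no opcode decoding. Python has no machine addresses, so the sketch below stands in with function references; it is an illustration of the concept, not the paper's hardware or software realization.

```python
# Toy sketch of threaded code: the "program" is a list of routine references,
# and execution just runs each cell in turn -- each cell *is* the routine,
# so there is no opcode-decoding interpreter loop.
stack = []

def push1(): stack.append(1)
def push2(): stack.append(2)
def add():   stack.append(stack.pop() + stack.pop())
def prn():   print(stack.pop())

program = [push1, push2, add, prn]   # computes and prints 1 + 2
for routine in program:
    routine()
```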
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in-network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
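To make the in-network idea above concrete, here is a minimal sketch of TAG-style partial aggregation for an AVG query over a routing tree. The (sum, count) encoding follows the paper's general recipe for decomposable aggregates, but the tree layout, readings, and function names are our own illustration.

```python
# TAG-style in-network AVG: each node merges its children's partial states
# (sum, count) with its own reading and forwards a single record upward.
def avg_init(reading):
    return (reading, 1)

def avg_merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

def aggregate(node, children, readings):
    state = avg_init(readings[node])
    for child in children.get(node, []):
        state = avg_merge(state, aggregate(child, children, readings))
    return state

children = {"root": ["a", "b"], "a": ["c"]}                 # routing tree
readings = {"root": 20.0, "a": 22.0, "b": 18.0, "c": 24.0}  # sensor values
s, c = aggregate("root", children, readings)
print(s / c)   # 21.0 -- only one (sum, count) pair crosses each link
```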
A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models, such as causal models and abstraction hierarchies, are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed.
A comparative study of different FFT architectures for software defined radio The Fast Fourier Transform (FFT) is the most basic and essential operation performed in Software Defined Radio (SDR). Designing a regular, reconfigurable, modular FFT computation block with low hardware and timing complexity is therefore very important. A single FFT block should be configurable for varying-length FFT computation and also for computation of different transforms such as the Discrete Cosine/Sine Transform (DCT/DST). In this paper, the authors analyze the area, timing complexity, and noise-to-signal ratio (NSR) of Bruun's FFT with respect to the classical FFT from an SDR perspective. It is shown that the architecture of Bruun's FFT is ideally suited for SDR and may be used in preference to the classical FFT in most practical cases. A detailed comparison of Bruun's and classical FFT hardware architectures for the same NSR is carried out, and results of an FPGA implementation are discussed.
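For reference, the classical FFT that the study above uses as its baseline can be sketched in a few lines; this is the textbook radix-2 decimation-in-time recursion, not Bruun's algorithm, whose real-coefficient polynomial factorization is beyond a short sketch.

```python
# Textbook radix-2 decimation-in-time FFT (length must be a power of two).
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print([round(abs(v), 3) for v in fft([1, 0, 0, 0, 1, 0, 0, 0])])
# [2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]
```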
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the Security-Critical Processor Errata Catching System (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
Scores (score_0 to score_13): 1.105, 0.105, 0.1, 0.1, 0.1, 0.055, 0.0204, 0, 0, 0, 0, 0, 0, 0
A Software-Defined Always-On System With 57–75-nW Wake-Up Function Using Asynchronous Clock-Free Pipelined Event-Driven Architecture and Time-Shielding Level-Crossing ADC This work presents an ultra-low-power software-defined always-on wake-up system to drastically decrease the system power of Internet of Things (IoTs) devices, which usually operate in random-sparse-event (RSE) scenarios. It mainly thanks to a clock-free time-shielding level-crossing ADC (TS-LCADC), software-defined clock-free multi-function detectors, and an asynchronous pipelined event-driven arc...
Asynchronous Adaptive Threshold Level Crossing ADC for Wearable ECG Sensors. A level-crossing ADC generates digitized samples consisting of the magnitude of the input signal and the time interval between two consecutive level crossings, produced whenever the input signal crosses a threshold level. This paper presents a new architecture of a low-power asynchronous adaptive-threshold level-crossing (LC) ADC suitable for wearable ECG sensors, based on a novel algorithm for determining the adaptive threshold. The adaptive threshold is determined by calculating the mean of the maximum and minimum values of the signal in a predetermined window. Polynomial interpolation is used to reconstruct the signal. A signal-to-noise-and-distortion ratio of 57.50 dB and a mean square error (MSE) of 1.368*10 V were achieved by the proposed algorithm for a 1 mV, 10 Hz input sinusoidal signal in MATLAB. The asynchronous adaptive-threshold LC ADC, operating from a supply voltage of 0.8 V, occupied a layout area of 266.33 μm × 331.385 μm when implemented in Cadence Virtuoso using 180 nm technology. The designed circuit consumes an average power of 367.6 nW for a 1 mVpp, 10 Hz input sinusoidal signal when simulated in Virtuoso.
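The sampling principle described above can be sketched behaviorally: samples are produced only when the input moves by a level spacing, so signal activity, not a clock, drives the data rate. In the sketch below a fixed spacing delta stands in for the paper's adaptive rule (the mean of the windowed maximum and minimum); the signal and parameters are illustrative.

```python
# Behavioral sketch of level-crossing sampling: emit (index, value) only
# when the signal has moved by at least one level spacing 'delta'.
import math

def level_crossing_sample(signal, delta):
    samples = [(0, signal[0])]
    last = signal[0]
    for i, v in enumerate(signal[1:], start=1):
        if abs(v - last) >= delta:     # crossed a level: emit a sample
            samples.append((i, v))
            last = v
    return samples

sig = [math.sin(2 * math.pi * 10 * t / 1000) for t in range(1000)]
print(len(level_crossing_sample(sig, delta=0.1)))  # far fewer than 1000 samples
```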
A 1-to-1-kHz, 4.2-to-544-nW, Multi-Level Comparator Based Level-Crossing ADC for IoT Applications. This brief presents the design of an ultra-low power level-crossing analog-to-digital converter (LC-ADC) for IoT and biomedical applications. The proposed LC-ADC utilizes only one multi-level comparator instead of multiple comparators as in conventional LC-ADC, leading to simplified implementation and significant reduction in power. Implemented in 0.18-μm CMOS process, the LC-ADC achieves 7.9 equi...
From Seizure Detection to Smart and Fully Embedded Seizure Prediction Engine: A Review Recent review papers have investigated seizure prediction, creating the possibility of preempting epileptic seizures. Correct seizure prediction can significantly improve the standard of living for the majority of epileptic patients, as the unpredictability of seizures is a major concern for them. Today, the development of algorithms, particularly in the field of machine learning, enables reliable and accurate seizure prediction using desktop computers. However, despite extensive research effort being devoted to developing seizure detection integrated circuits (ICs), dedicated seizure prediction ICs have not been developed yet. We believe that interdisciplinary study of system architecture, analog and digital ICs, and machine learning algorithms can promote the translation of scientific theory to a more realistic intelligent, integrated, and low-power system that can truly improve the standard of living for epileptic patients. This review explores topics ranging from signal acquisition analog circuits to classification algorithms and dedicated digital signal processing circuits for detection and prediction purposes, to provide a comprehensive and useful guideline for the construction, implementation and optimization of wearable and integrated smart seizure prediction systems.
A Flash-Based Non-Uniform Sampling ADC With Hybrid Quantization Enabling Digital Anti-Aliasing Filter. This paper introduces different classes of analog-to-digital converter (ADC) architecture that non-uniformly samples the analog input and shifts from conventional voltage quantization to a hybrid quantization paradigm wherein both voltage and time quantization are utilized. In this architecture, the sampling rate adapts to the input frequency, which maintains an alias-free spectrum and enables an ...
An Event-driven Clockless Level-Crossing ADC With Signal-Dependent Adaptive Resolution This paper presents a clock-less 8b ADC in 130 nm CMOS technology, which uses signal-dependent sampling rate and adaptive resolution through a time-varying comparison window, for applications with sparse input signals. Input-dependent dynamic bias is used to reduce comparator delay dispersion, thus helping to maintain SNDR while saving power. Alias-free operation with SNDR in the range of 47-54 dB, which partly exceeds the theoretical limit of 8b conventional converters, is achieved over a 20 kHz bandwidth with 3-9 μW power from a 0.8 V supply.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop-gain enhancement without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with a maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
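The randomized rotation at the heart of LEACH is compact enough to state directly: in round r, an eligible node (one that has not served as cluster head in the current epoch) elects itself with threshold T = P / (1 - P * (r mod 1/P)), where P is the desired fraction of cluster heads. The sketch below implements just this election step; the node ids and the value of P are illustrative.

```python
# LEACH cluster-head election for one round (threshold rule from the paper).
import random

def elect_cluster_heads(eligible, r, P=0.05):
    epoch = round(1 / P)                      # rounds per rotation epoch
    threshold = P / (1 - P * (r % epoch))     # grows as the epoch progresses
    return [n for n in eligible if random.random() < threshold]

eligible = list(range(100))   # nodes that have not been heads this epoch
print(elect_cluster_heads(eligible, r=0))   # about 5 heads expected at r = 0
```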
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
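The distributed average-consensus building block that the method above relies on is easy to demonstrate: each agent repeatedly replaces its value with a weighted average of its neighbors' values under a doubly stochastic mixing matrix. The 4-agent ring and Metropolis-style weights below are our own illustrative choices.

```python
# Average consensus on a 4-agent ring with a doubly stochastic mixing matrix.
import numpy as np

W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.array([1.0, 3.0, 5.0, 7.0])   # each agent's local value
for _ in range(100):
    x = W @ x                         # one round of neighbor averaging
print(x)                              # every entry approaches the mean, 4.0
```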
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores (score_0 to score_13): 1.1, 0.05, 0.05, 0.05, 0.025, 0.0125, 0, 0, 0, 0, 0, 0, 0, 0
Finite-time synchronization of delayed fractional-order heterogeneous complex networks. This paper is devoted to exploring the finite-time (FET) synchronization problem of time-varying delay fractional-order (FO) coupled heterogeneous complex networks (TFCHCNs) with external interference via a discontinuous feedback controller. Firstly, we propose a novel Lemma which is useful for discussing the FET stability and synchronization problem of FO systems. Secondly, based on the proposed Lemma, a discontinuous feedback controller is designed to guarantee the FET synchronization of TFCHCNs with external interference. Moreover, the upper bound of settling-time function is obtained. Finally, two simulation examples are provided to verify the practicability of our findings.
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-Physical-Social Systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS must be pursued with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Pinning impulsive directed coupled delayed dynamical network and its applications The main objective of the present paper is to further investigate pinning synchronisation of a complex delayed dynamical network with directional coupling by a single impulsive controller. By extending the analysis procedure previously developed for pinning impulsive stability of undirected coupled dynamical networks, some simple yet general criteria for pinning impulsive synchronisation of such directed coupled networks are derived analytically. It is shown that a single impulsive controller can always pin a given directed coupled network to a desired homogeneous solution, including an equilibrium point, a periodic orbit, or a chaotic orbit. Subsequently, the theoretical results are illustrated by a directed small-world complex network which is a cellular neural network (CNN) and a directed scale-free complex network with the well-known Hodgkin-Huxley neuron oscillators. Numerical simulations are finally given to demonstrate the effectiveness of the proposed control methodology.
Finite-Time Cluster Synchronization of Lur'e Networks: A Nonsmooth Approach. This paper is devoted to the finite-time cluster synchronization issue of nonlinearly coupled complex networks which consist of discontinuous Lur'e systems. On the basis of the definition of the Filippov regularization process and the measurable selection theorem, the discontinuous nonlinear function is mapped into a function-valued set, and a measurable function is accordingly selected from the Fi...
Analysis and pinning control for passivity of coupled different dimensional neural networks. In this paper, we discuss the passivity of coupled different dimensional neural networks. On the one hand, several passivity criteria for coupled neural networks with different dimensional nodes are proposed by making use of some inequality techniques and the Lyapunov functional method. Furthermore, we study the pinning passivity of coupled different dimensional neural networks with fixed and adaptive coupling strength, and obtain some sufficient conditions to ensure the pinning passivity of the considered network by designing proper pinning controllers. On the other hand, the passivity analysis and pinning control problem for coupled different dimensional delayed neural networks are studied similarly. Finally, the effectiveness of the derived results is verified by two numerical examples.
Trajectory Tracking on Uncertain Complex Networks via NN-Based Inverse Optimal Pinning Control. A new approach for trajectory tracking on uncertain complex networks is proposed. To achieve this goal, a neural controller is applied to a small fraction of nodes (pinned ones). Such controller is composed of an on-line identifier based on a recurrent high-order neural network, and an inverse optimal controller to track the desired trajectory; a complete stability analysis is also included. In order to verify the applicability and good performance of the proposed control scheme, a representative example is simulated, which consists of a complex network with each node described by a chaotic Lorenz oscillator.
Recent Advances on Dynamical Behaviors of Coupled Neural Networks With and Without Reaction–Diffusion Terms Recently, the dynamical behaviors of coupled neural networks (CNNs) with and without reaction-diffusion terms have been widely researched due to their successful applications in different fields. This article introduces some important and interesting results on this topic. First, synchronization, passivity, and stability analysis results for various CNNs with and without reaction-diffusion terms are summarized, including the results for impulsive, time-varying, time-invariant, uncertain, fuzzy, and stochastic network models. In addition, some control methods, such as sampled-data control, pinning control, impulsive control, state feedback control, and adaptive control, have been used to realize the desired dynamical behaviors in CNNs with and without reaction-diffusion terms. In this article, these methods are summarized. Finally, some challenging and interesting problems deserving of further investigation are discussed.
Finite-Time Passivity and Synchronization of Complex Dynamical Networks With State and Derivative Coupling. In this article, two kinds of complex dynamical networks (CDNs) with state and derivative coupling are investigated, respectively. First, some important concepts about finite-time passivity (FTP), finite-time output strict passivity, and finite-time input strict passivity are introduced. By making use of state-feedback controllers and adaptive state-feedback controllers, several sufficient conditi...
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
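The heavy-edge heuristic mentioned above is simple to state: visit vertices in random order and match each one to the unmatched neighbor joined by the heaviest edge, so that collapsing matched pairs removes as much edge weight as possible from the coarse graph. A minimal sketch of one coarsening step, with our own adjacency-map encoding:

```python
# Heavy-edge matching for one multilevel coarsening step.
import random

def heavy_edge_matching(adj):
    """adj: {vertex: {neighbor: weight}} -> list of matched vertex pairs."""
    matched, pairs = set(), []
    order = list(adj)
    random.shuffle(order)                 # randomized visiting order
    for v in order:
        if v in matched:
            continue
        matched.add(v)
        candidates = [(w, u) for u, w in adj[v].items() if u not in matched]
        if candidates:
            _, u = max(candidates)        # heaviest incident edge wins
            matched.add(u)
            pairs.append((v, u))          # (v, u) collapse into one coarse vertex
    return pairs

g = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 2}, 2: {0: 1, 3: 4}, 3: {1: 2, 2: 4}}
print(heavy_edge_matching(g))   # typically [(0, 1), (2, 3)]
```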
A general theory of phase noise in electrical oscillators A general model is introduced which is capable of making accurate, quantitative predictions about the phase noise of different types of electrical oscillators by acknowledging the true periodically time-varying nature of all oscillators. This new approach also elucidates several previously unknown design criteria for reducing close-in phase noise by identifying the mechanisms by which intrinsic de...
Noise in current-commutating passive FET mixers Noise in the mixer of zero-IF receivers can compromise the overall receiver sensitivity. The evolution of a passive CMOS mixer based on the knowledge of the physical mechanisms of noise in an active mixer is explained. Qualitative physical models that simply explain the frequency translation of both the flicker and white noise of different FETs in the mixer have been developed. Derived equations have been verified by simulations, and mixer optimization has been explained.
Decision making for cognitive radio equipment: analysis of the first 10 years of exploration. This article draws a general retrospective view on the first 10 years of cognitive radio (CR). More specifically, we explore in this article decision making and learning for CR from an equipment perspective. Thus, this article depicts the main decision making problems addressed by the community as general dynamic configuration adaptation (DCA) problems and discusses the solutions suggested in the literature to tackle them. Within this framework, dynamic spectrum management is briefly introduced as a specific instantiation of DCA problems. We identified, in our analysis study, three dimensions of constraints: the environment's, the equipment's, and the user's constraints. Moreover, we define and use the notion of a priori knowledge to show that the challenges tackled by the radio community during the first 10 years of CR to solve decision making problems often share the same design space, but differ in the a priori knowledge they assume available. Consequently, we suggest in this article "a priori knowledge" as a classification criterion to discriminate among the main techniques proposed in the literature to solve configuration adaptation decision making problems. We finally discuss the impact of sensing errors on the decision making process as a prospective analysis.
A Low-Voltage Chopper-Stabilized Amplifier for Fetal ECG Monitoring With a 1.41 Power Efficiency Factor. This paper presents a low-voltage current-reuse chopper-stabilized frontend amplifier for fetal ECG monitoring. The proposed amplifier allows for individual tuning of the noise in each measurement channel, minimizing the total power consumption while satisfying all application requirements. The low-voltage current reuse topology exploits power optimization in both the current and the voltage domain, exploiting multiple supply voltages (0.3, 0.6 and 1.2 V). The power management circuitry providing the different supplies is optimized for high efficiency (peak charge-pump efficiency = 90%).The low-voltage amplifier together with its power management circuitry is implemented in a standard 0.18 μm CMOS process and characterized experimentally. The amplifier core achieves both good noise efficiency factor (NEF=1.74) and power efficiency factor (PEF=1.05). Experiments show that the amplifier core can provide a noise level of 0.34 μVrms in a 0.7 to 182 Hz band, consuming 1.17 μW power. The amplifier together with its power management circuitry consumes 1.56 μW, achieving a PEF of 1.41. The amplifier is also validated with adult ECG and pre-recorded fetal ECG measurements.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm, together with a real-time QRS-detection algorithm, is proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm is able to track the input signal's variation range and automatically adjust the subrange interval and update the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and a 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoidal input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a FoM of 48 fJ/conversion-step in the best case. The prototype is also tested with an ECG signal input and extracts the heartbeat signal successfully.
Scores (score_0 to score_13): 1.11, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0
A 174.3-dB FoM VCO-Based CT ΔΣ Modulator With a Fully-Digital Phase Extended Quantizer and Tri-Level Resistor DAC in 130-nm CMOS. This paper presents a high dynamic range (DR) power-efficient voltage-controlled oscillator (VCO)-based continuous-time ΔΣ modulator. It introduces a robust and low-power fully-digital phase extended quantizer that doubles the VCO quantizer resolution compared to a conventional XOR-based phase detector. A tri-level resistor digital-to-analog converter is also introduced as complementary to the new...
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations, improving throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently-activated critical paths. The second error-detection design offers a less-intrusive approach for dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits of the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at FCLK/2. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
A 0.5-V 1.6-mW 2.4-GHz Fractional-N All-Digital PLL for Bluetooth LE With PVT-Insensitive TDC Using Switched-Capacitor Doubler in 28-nm CMOS. This paper proposes an ultra-low-voltage (ULV) fractional-N all-digital PLL (ADPLL) powered from a single 0.5-V supply. While its digitally controlled oscillator (DCO) runs directly at 0.5 V, an internal switched-capacitor dc-dc converter “doubles” the supply voltage to all the digital circuitry and particularly regulates the time-to-digital converter (TDC) supply to stabilize its resolution, thus...
An Ultra-Low Voltage Level Shifter Using Revised Wilson Current Mirror for Fast and Energy-Efficient Wide-Range Voltage Conversion from Sub-Threshold to I/O Voltage This paper presents a novel ultra-low voltage level shifter for fast and energy-efficient wide-range voltage conversion from sub-threshold to I/O voltage. By addressing the voltage drop and non-optimal feedback control in a state-of-the-art level shifter based on Wilson current mirror, the proposed level shifter with revised Wilson current mirror significantly improves the delay and power consumption while achieving a wide voltage conversion range. It also employs mixed-Vt device and device sizing aware of inverse narrow width effect to further improve the delay and power consumption. Measurement results at 0.18 μm show that compared with the Wilson current mirror based level shifter, the proposed level shifter improves the delay, switching energy and leakage power by up to 3×, 19×, 29× respectively, when converting 0.3 V to a voltage between 0.6 V and 3.3 V. More specifically, it achieves 1.03 (or 1.15) FO4 delay, 39 (or 954) fJ/transition and 160 (or 970) pW leakage power, when converting 0.3 V to 1.8 V (or 3.3 V), which is better than several state-of-the-art level shifters for similar range voltage conversion. The measurement results also show that the proposed level shifter has good delay scalability with supply voltage scaling and low sensitivity to process and temperature variations.
A Multimodal CMOS MEA for High-Throughput Intracellular Action Potential Measurements and Impedance Spectroscopy in Drug-Screening Applications. Multi-electrode arrays (MEAs) are a candidate technology to screen cardiotoxicity in vitro because they enable noninvasive recording of cardiac beating rate, electrical field potential duration, and other parameters. In this paper, we present an active MEA chip featuring 16 384 electrodes, 1024 simultaneous readout channels, and 64 stimulation units (SUs) to enable six different cell-interfacing m...
A 1-µW 10-bit 200-kS/s SAR ADC With a Bypass Window for Biomedical Applications This paper presents an energy efficient successive-approximation-register (SAR) analog-to-digital converter (ADC) for biomedical applications. To reduce energy consumption, a bypass window technique is used to select switching sequences to skip several conversion steps when the signal is within a predefined small window. The power consumptions of the capacitive digital-to-analog converter (DAC), latch comparator, and digital control circuit of the proposed ADC are lower than those of a conventional SAR ADC. The proposed bypass window tolerates the DAC settling error and comparator voltage offset in the first four phases and suppresses the peak DNL and INL values. A proof-of-concept prototype was fabricated in 0.18-μm 1P6M CMOS technology. At a 0.6-V supply voltage and a 200-kS/s sampling rate, the ADC achieves a signal-to-noise and distortion ratio of 57.97 dB and consumes 1.04 μW, resulting in a figure of merit of 8.03 fJ/conversion-step. The ADC core occupies an active area of only 0.082 mm2.
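For context, the successive-approximation search that the bypass window above shortens is a plain binary search against a DAC level. The behavioral sketch below shows that baseline loop; the window logic itself, which skips bit trials when the input falls inside a predefined small window, is omitted, and all parameters are illustrative.

```python
# Behavioral SAR ADC: binary search of the input against DAC output levels.
def sar_convert(vin, vref=1.0, bits=10):
    code = 0
    for b in reversed(range(bits)):          # one comparator decision per bit
        trial = code | (1 << b)              # tentatively set the next bit
        if vin >= trial * vref / (1 << bits):
            code = trial                     # comparator says keep the bit
    return code

print(sar_convert(0.6))   # 614, i.e. about 0.6 * 2**10
```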
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratchpad memory as an alternative to cache. Area and energy for different scratchpad and cache sizes are computed using the CACTI tool, while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was the AT91M40400. The results clearly establish scratchpad memory as a low-power alternative in most situations, with an average energy reduction of 40%. Further, the average area-time reduction for the scratchpad memory was 46% compared with the cache memory.
Development of Integrated Broad-Band CMOS Low-Noise Amplifiers This paper presents a systematic design methodology for broad-band CMOS low-noise amplifiers (LNAs). The feedback technique is proposed to attain a better design tradeoff between gain and noise. The network synthesis is adopted for the implementation of broad-band matching networks. The sloped interstage matching is used for gain compensation. A fully integrated ultra-wide-band 0.18-mum CMOS LNA i...
Software radio architecture with smart antennas: a tutorial on algorithms and complexity There has been considerable interest in using antenna arrays in wireless communication networks to increase the capacity and decrease the cochannel interference. Adaptive beamforming with smart antennas at the receiver increases the carrier-to-interference ratio (CIR) in a wireless link. This paper considers a wireless network with beamforming capabilities at the receiver which allows two or more transmitters to share the same channel to communicate with the base station. The concrete computational complexity and algorithm structure of a base station are considered in terms of a software radio system model, initially with an omnidirectional antenna. The software radio computational model is then expanded to characterize a network with smart antennas. The application of the software radio smart antenna is demonstrated through two examples. First, traffic improvement in a network with a smart antenna is considered, and the implementation of a hand-off algorithm in the software radio is presented. The blocking probabilities of the calls and total carried traffic in the system under different traffic policies are derived. The analytical and numerical results show that adaptive beamforming at the receiver reduces the probability of blocking and forced termination of the calls and increases the total carried traffic in the system. Then, a joint beamforming and power control algorithm is implemented in a software radio smart antenna in a CDMA network. This shows that, by using smart antennas, each user can transmit with much lower power, and therefore the system capacity increases significantly
Accelerating microprocessor silicon validation by exposing ISA diversity Microprocessor design validation is a time consuming and costly task that tends to be a bottleneck in the release of new architectures. The validation step that detects the vast majority of design bugs is the one that stresses the silicon prototypes by applying huge numbers of random tests. Despite its bug detection capability, this step is constrained by extreme computing needs for random tests simulation to extract the bug-free memory image for comparison with the actual silicon image. We propose a self-checking method that accelerates silicon validation and significantly increases the number of applied random tests to improve bug detection efficiency and reduce time-to-market. Analysis of four major ISAs (ARM, MIPS, PowerPC, and x86) reveals their inherent diversity: more than three quarters of the instructions can be replaced with equivalent instructions. We exploit this property in post-silicon validation and propose a methodology for the generation of random tests that detect bugs by comparing results of equivalent instructions. We support our bug detection method in hardware with a light-weight mechanism which, in case of a mismatch, replays the random test replacing the offending instruction with its equivalent. Our bug detection method and corresponding hardware significantly accelerate the post-silicon validation process. Evaluation of the method on an x86 microprocessor model demonstrates its efficiency over simulation-based and self-checking alternatives, in terms of bug detection capabilities and validation time speedup.
Scheduling Analysis of TDMA-Constrained Tasks: Illustration with Software Radio Protocols In this paper a new task model is proposed for scheduling analysis of dependent tasks in radio stations that embed a TDMA communication protocol. TDMA is a channel access protocol that allows several stations to communicate in a same network, by dividing time into several time slots. Tasks handling the TDMA radio protocol are scheduled in a manner to be compliant with the TDMA configuration: task parameters such as execution times, deadlines and release times are constrained by TDMA slots. The periodic task model, commonly used in scheduling analysis, is inefficient for the accurate specification of such systems, resulting in pessimistic scheduling analysis results. To encompass this issue, this paper proposes a new task model called Dependent General Multiframe (DGMF). This model extends the existing GMF model with precedence dependency and shared resource synchronization. We show how to perform scheduling analysis with DGMF by transforming it into a transaction model and using a schedulability test we proposed. In this paper we experiment on "software radio protocols" from Thales Communications & Security, which are representative of the system we want to analyze. Experimental results show an improvement of system schedulability using the proposed analysis technique, compared to existing ones (GMF and periodic tasks). The new task model thus provides a technique to model and analyze TDMA systems with less pessimistic results.
Robust Biopotential Acquisition via a Distributed Multi-Channel FM-ADC. This contribution presents an active electrode system for biopotential acquisition using a distributed multi-channel FM-modulated analog front-end and ADC architecture. Each electrode captures one biopotential signal and converts to a frequency modulated signal using a VCO tuned to a unique frequency. Each electrode then buffers its output onto a shared analog line that aggregates all of the FM-mo...
Scores (score_0 to score_13): 1.077778, 0.086667, 0.086667, 0.086667, 0.086667, 0.066667, 0.04, 0.000833, 0, 0, 0, 0, 0, 0
Broadband GaN MMIC Doherty Power Amplifier Using Continuous-Mode Combining for 5G Sub-6 GHz Applications This article presents a broadband fully integrated Doherty power amplifier (DPA) using a continuous-mode combining load. It is illustrated that the continuous-mode impedance condition in back-off and saturation for Doherty operation can be achieved with a simple impedance inverter network (IIN) that can be realized using lumped components in gallium nitride (GaN) monolithic microwave integrated circuits (MMICs). A DPA was designed and fabricated using the 250-nm GaN process to validate the proposed architecture and design methodology. The fabricated DPA chip attains around 8 W saturated power from 4.1 to 5.6 GHz. About 38.5%–46.5% drain efficiencies are achieved at 6-dB output power back-off within the entire design band. When driven by a 100-MHz OFDM signal with 6.5-dB peak-to-average power ratio (PAPR), the proposed DPA achieves better than −45-dBc adjacent channel leakage ratio (ACLR) and higher than 38% average efficiency at 4.4 and 5.2 GHz after digital predistortion.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
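Dominance frontiers admit a compact computation once immediate dominators are known. The sketch below uses the later two-finger formulation popularized by Cooper, Harvey, and Kennedy, not the paper's original bottom-up algorithm, and assumes the immediate-dominator map is already computed:

```python
# Sketch: dominance frontiers via the two-finger walk (a later
# simplification, not the paper's own formulation). Assumes the
# immediate-dominator map `idom` has already been computed.
def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate dominator."""
    df = {n: set() for n in preds}
    for node, ps in preds.items():
        if len(ps) < 2:                    # only join points have frontiers to record
            continue
        for p in ps:
            runner = p
            while runner != idom[node]:
                df[runner].add(node)       # node is in runner's frontier: phi site
                runner = idom[runner]
    return df

# Tiny diamond CFG: entry -> a, b; a, b -> merge
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": None, "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))    # a and b both have {'merge'}
```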
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
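Chord's lookup is easy to sketch for a static ring. The code below assumes fully populated finger tables, no joins or failures, and made-up node IDs; it follows the shape of the paper's find_successor pseudocode but is not a faithful implementation:

```python
# Sketch of Chord's lookup on an m-bit identifier ring, simplified to a
# static node set (no joins, failures, or stabilization; IDs are made up).
M = 6                                   # identifier bits; ring size 2**M

def in_interval(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

class Node:
    def __init__(self, ident):
        self.id = ident
        self.fingers = []               # fingers[i] = successor of id + 2**i

    def closest_preceding(self, key):
        for f in reversed(self.fingers):
            if in_interval(f.id, self.id, key - 1):   # strictly before key
                return f
        return self

    def find_successor(self, key):
        succ = self.fingers[0]          # fingers[0] is the immediate successor
        if in_interval(key, self.id, succ.id):
            return succ
        nxt = self.closest_preceding(key)
        return succ if nxt is self else nxt.find_successor(key)

ids = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
nodes = {i: Node(i) for i in ids}
def successor_of(x):                    # first node clockwise from identifier x
    return nodes[min((i for i in ids if i >= x), default=min(ids))]
for n in nodes.values():
    n.fingers = [successor_of((n.id + 2**k) % 2**M) for k in range(M)]

print(nodes[8].find_successor(54).id)   # key 54 maps to node 56
```

Each hop at least halves the remaining identifier distance, which is where the logarithmic lookup cost comes from.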
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
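For reference, the scaled-form ADMM iterations for minimizing f(x) + g(z) subject to Ax + Bz = c, in the standard notation used by the survey, are:

```latex
\begin{aligned}
x^{k+1} &:= \operatorname*{argmin}_x \;\Big( f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k} \rVert_2^2 \Big) \\
z^{k+1} &:= \operatorname*{argmin}_z \;\Big( g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k} \rVert_2^2 \Big) \\
u^{k+1} &:= u^{k} + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}
```

For the lasso, for instance, the x-update is a ridge-regression-style linear solve and the z-update reduces to elementwise soft-thresholding, which is why ADMM splits that problem so cleanly.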
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Intelligent Multiagent Coordination Based On Reinforcement Hierarchical Neuro-Fuzzy Models This paper presents the research and development of two hybrid neuro-fuzzy models for the hierarchical coordination of multiple intelligent agents. The main objective of the models is to have multiple agents interact intelligently with each other in complex systems. We developed two new models of coordination for intelligent multiagent systems, which integrate the Reinforcement Learning Hierarchical Neuro-Fuzzy model with two proposed coordination mechanisms: the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy with a market-driven coordination mechanism (MA-RL-HNFP-MD) and the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy with graph coordination (MA-RL-HNFP-CG). In order to evaluate the proposed models and verify the contribution of the proposed coordination mechanisms, two multiagent benchmark applications were developed: the pursuit game and the robot soccer simulation. The results obtained demonstrate that the proposed coordination mechanisms greatly improve the performance of the multiagent system when compared with other strategies.
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Optimal Tracking Control of Motion Systems Tracking control of motion systems typically requires accurate nonlinear friction models, especially at low speeds, and integral action. However, building accurate nonlinear friction models is time consuming, friction characteristics dramatically change over time, and special care must be taken to avoid windup in a controller employing integral action. In this paper a new approach is proposed for the optimal tracking control of motion systems with significant disturbances, parameter variations, and unmodeled dynamics. The ‘desired’ control signal that will keep the nominal system on the desired trajectory is calculated based on the known system dynamics and is utilized in a performance index to design an optimal controller. However, in the presence of disturbances, parameter variations, and unmodeled dynamics, the desired control signal must be adjusted. This is accomplished by using neural network based observers to identify these quantities, and update the control signal on-line. This formulation allows for excellent motion tracking without the need for the addition of an integral state. The system stability is analyzed and Lyapunov based weight update rules are applied to the neural networks to guarantee the boundedness of the tracking error, disturbance estimation error, and neural network weight errors. Experiments are conducted on the linear axes of a mini CNC machine for the contour control of two orthogonal axes, and the results demonstrate the excellent performance of the proposed methodology.
Adaptive tracking control of leader-follower systems with unknown dynamics and partial measurements. In this paper, a decentralized adaptive tracking control is developed for a second-order leader–follower system with unknown dynamics and relative position measurements. Linearly parameterized models are used to describe the unknown dynamics of a self-active leader and all followers. A new distributed system is obtained by using the relative position and velocity measurements as the state variables. By only using the relative position measurements, a dynamic output–feedback tracking control together with decentralized adaptive laws is designed for each follower. At the same time, the stability of the tracking error system and the parameter convergence are analyzed with the help of a common Lyapunov function method. Some simulation results are presented to validate the proposed adaptive tracking control.
Plug-and-Play Decentralized Model Predictive Control for Linear Systems In this technical note, we consider a linear system structured into physically coupled subsystems and propose a decentralized control scheme capable to guarantee asymptotic stability and satisfaction of constraints on system inputs and states. The design procedure is totally decentralized, since the synthesis of a local controller uses only information on a subsystem and its neighbors, i.e. subsystems coupled to it. We show how to automatize the design of local controllers so that it can be carried out in parallel by smart actuators equipped with computational resources and capable to exchange information with neighboring subsystems. In particular, local controllers exploit tube-based Model Predictive Control (MPC) in order to guarantee robustness with respect to physical coupling among subsystems. Finally, an application of the proposed control design procedure to frequency control in power networks is presented.
Event-Based Leader-following Consensus of Multi-Agent Systems with Input Time Delay The event-based control strategy is an effective methodology for tackling the distributed control of multi-agent systems with limited on-board resources. This technical note focuses on event-based leader-following consensus for multi-agent systems described by general linear models and subject to input time delay between controller and actuator. For each agent, the controller updates are event-based and only triggered at its own event times. A necessary condition and two sufficient conditions on leader-following consensus are presented, respectively. It is shown that continuous communication between neighboring agents can be avoided and the Zeno-behavior of triggering time sequences is excluded. A numerical example is presented to illustrate the effectiveness of the obtained theoretical results.
Adaptive Cooperative Output Regulation for a Class of Nonlinear Multi-Agent Systems In this technical note, an adaptive cooperative output regulation problem for a class of nonlinear multi-agent systems is considered. The cooperative output regulation problem is first converted into an adaptive stabilization problem for an augmented multi-agent system. A distributed adaptive control law with adoption of Nussbaum gain technique is then proposed to globally stabilize this augmented system. This control scheme is designed such that, in the presence of unknown control direction and large parameter variations in each agent, the closed-loop system maintains global stability and the output of each agent tracks a class of prescribed signals asymptotically.
Self-constructing wavelet neural network algorithm for nonlinear control of large structures An adaptive control algorithm is presented for nonlinear vibration control of large structures subjected to dynamic loading. It is based on the integration of a self-constructing wavelet neural network (SCWNN), developed specifically for structural system identification, with an adaptive fuzzy sliding mode control approach. The algorithm is particularly suitable when the physical properties, such as the stiffnesses and damping ratios of the structural system, are unknown or only partially known, which is the case when a structure is subjected to an extreme dynamic event such as an earthquake, since the structural properties change during the event. SCWNN is developed for functional approximation of the nonlinear behavior of large structures using neural networks and wavelets. In contrast to earlier work, identification and control are processed simultaneously, which makes the resulting adaptive control more applicable to real-life situations. A two-part growing and pruning criterion is developed to construct the hidden layer in the neural network automatically. A fuzzy compensation controller is developed to reduce the chattering phenomenon. The robustness of the proposed algorithm is achieved by deriving a set of adaptive laws for determining the unknown parameters of the wavelet neural networks using two Lyapunov functions. No offline training of the neural network is necessary for the system identification process. In addition, the earthquake signals are considered unidentified. This is particularly important for online vibration control of large civil structures, since the external dynamic loading due to an earthquake is not available in advance. The model is applied to a seismically excited highway bridge benchmark problem: vibration control of a continuous cast-in-place prestressed concrete box-girder bridge.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
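A minimal numpy sketch of the gradient-based learning loop the paper builds on, using logistic regression on synthetic data as a toy stand-in for the convolutional architectures it actually studies:

```python
import numpy as np

# Minimal gradient-based learning loop: logistic regression trained with
# batch gradient descent, a toy stand-in for the paper's convolutional nets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # forward pass (sigmoid)
    grad_w = X.T @ (p - y) / len(y)              # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w, b = w - lr * grad_w, b - lr * grad_b      # descend the gradient

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == (y == 1.0)))
```

Back-propagation generalizes exactly this pattern: the same loss gradient is pushed through many stacked layers via the chain rule.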
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
MDVM System Concept, Paging Latency and Round-2 Randomized Leader Election Algorithm in SG The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication networks. However, mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize the memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of a cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology and buffer space limitations, and is based on dynamically elected coordinators, eliminating a single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round, and the possibility of second-round execution decreases significantly with the increase in the size of the SG (|N_a|). The overall message complexity of the algorithm is O(|N_a|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
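The round structure of such a randomized election can be sketched abstractly. The toy model below is an assumption-laden illustration: the token space, the tie-breaking fallback, and the absence of messaging are all simplifications, not the paper's protocol:

```python
import random

# Toy model of a two-round randomized election in a server-group: every
# candidate draws a random token; a unique maximum wins round 1, otherwise
# only the tied candidates redraw in round 2.
def elect(node_ids, token_space=1 << 16, seed=None):
    rng = random.Random(seed)
    candidates = list(node_ids)
    for round_no in (1, 2):
        tokens = {n: rng.randrange(token_space) for n in candidates}
        best = max(tokens.values())
        tied = [n for n, t in tokens.items() if t == best]
        if len(tied) == 1:
            return tied[0], round_no
        candidates = tied               # only tied nodes re-draw
    return min(tied), 2                 # deterministic fallback on a rare double tie

print(elect(range(8), seed=42))         # (leader_id, round_number)
```

With a large token space, a round-1 tie is already improbable, and the probability of needing a second round shrinks further as the group grows relative to the token space collisions, mirroring the experimental observation in the abstract.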
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.213333
0.06
0
0
0
0
0
0
Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks. This paper presents the Neural Cache architecture, which re-purposes cache structures to transform them into massively parallel compute units capable of running inferences for Deep Neural Networks. Techniques to do in-situ arithmetic in SRAM arrays, create efficient data mapping and reducing data movement are proposed. The Neural Cache architecture is capable of fully executing convolutional, fully connected, and pooling layers in-cache. The proposed architecture also supports quantization in-cache. Our experimental results show that the proposed architecture can improve inference latency by 18.3X over state-of-art multi-core CPU (Xeon E5), 7.7X over server class GPU (Titan Xp), for Inception v3 model. Neural Cache improves inference throughput by 12.4X over CPU (2.2X over GPU), while reducing power consumption by 50% over CPU (53% over GPU).
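The bit-serial arithmetic at the heart of such designs can be modeled functionally: operands are stored as bit-planes, and an add walks the bit positions with a carry vector, all columns in parallel. A sketch in which numpy stands in for the SRAM array (sizes and bit-widths are illustrative):

```python
import numpy as np

# Functional sketch of bit-serial, bit-parallel addition in the
# Neural Cache style: many "columns" add in parallel, one bit per step.
BITS = 8
rng = np.random.default_rng(1)
a = rng.integers(0, 100, size=16, dtype=np.uint16)
b = rng.integers(0, 100, size=16, dtype=np.uint16)

# Transpose operands into bit-planes: plane i holds bit i of every column.
a_planes = [(a >> i) & 1 for i in range(BITS)]
b_planes = [(b >> i) & 1 for i in range(BITS)]

carry = np.zeros_like(a)
result = np.zeros_like(a)
for i in range(BITS):                        # one sense-amp cycle per bit position
    s = a_planes[i] ^ b_planes[i] ^ carry    # sum bit for every column at once
    carry = (a_planes[i] & b_planes[i]) | (carry & (a_planes[i] ^ b_planes[i]))
    result |= s << i

assert np.array_equal(result, a + b)         # inputs < 100, so no overflow
print(result[:4], (a + b)[:4])
```

Latency grows with the bit-width, but throughput scales with the number of columns, which is why repurposed cache arrays deliver such high parallelism.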
FAFNIR: Accelerating Sparse Gathering by Using Efficient Near-Memory Intelligent Reduction Memory-bound sparse gathering, caused by irregular random memory accesses, has become an obstacle in several on-demand applications such as embedding lookup in recommendation systems. To reduce the amount of data movement, and thereby better utilize memory bandwidth, previous studies have proposed near-data processing (NDP) solutions. The issue of prior work, however, is that they either minimize ...
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing through combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per each memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
Near memory data structure rearrangement As CPU core counts continue to increase, the gap between compute power and available memory bandwidth has widened. A larger and deeper cache hierarchy benefits locality-friendly computation, but offers limited improvement to irregular, data intensive applications. In this work we explore a novel approach to accelerating these applications through in-memory data restructuring. Unlike other proposed processing-in-memory architectures, the rearrangement hardware performs data reduction, not compute offload. Using a custom FPGA emulator, we quantitatively evaluate performance and energy benefits of near-memory hardware structures that dynamically restructure in-memory data to cache-friendly layout, minimizing wasted memory bandwidth. Our results on representative irregular benchmarks using the Micron Hybrid Memory Cube memory model show speedup, bandwidth savings, and energy reduction. We present an API for the near-memory accelerator and describe the interaction between the CPU and the rearrangement hardware with application examples. The merits of an SRAM vs. a DRAM scratchpad buffer for rearranged data are explored.
D-RaNGe: Using Commodity DRAM Devices to Generate True Random Numbers with Low Latency and High Throughput We propose a new DRAM-based true random number generator (TRNG) that leverages DRAM cells as an entropy source. The key idea is to intentionally violate the DRAM access timing parameters and use the resulting errors as the source of randomness. Our technique specifically decreases the DRAM row activation latency (timing parameter tRCD) below manufacturer-recommended specifications, to induce read errors, or activation failures, that exhibit true random behavior. We then aggregate the resulting data from multiple cells to obtain a TRNG capable of providing a high throughput of random numbers at low latency. To demonstrate that our TRNG design is viable using commodity DRAM chips, we rigorously characterize the behavior of activation failures in 282 state-of-the-art LPDDR4 devices from three major DRAM manufacturers. We verify our observations using four additional DDR3 DRAM devices from the same manufacturers. Our results show that many cells in each device produce random data that remains robust over both time and temperature variation. We use our observations to develop D-RanGe, a methodology for extracting true random numbers from commodity DRAM devices with high throughput and low latency by deliberately violating the read access timing parameters. We evaluate the quality of our TRNG using the commonly-used NIST statistical test suite for randomness and find that D-RaNGe: 1) successfully passes each test, and 2) generates true random numbers with over two orders of magnitude higher throughput than the previous highest-throughput DRAM-based TRNG.
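Independent of the DRAM-specific characterization, raw biased failure bits can be debiased with a classic von Neumann corrector. The sketch below simulates a biased source; the bias value is illustrative, and this is not necessarily D-RaNGe's own post-processing:

```python
import random

# Sketch: turning biased but independent raw bits -- e.g., DRAM
# activation-failure flags -- into unbiased output bits with a
# von Neumann corrector.
def biased_source(p_one=0.7, seed=7):
    rng = random.Random(seed)
    while True:
        yield 1 if rng.random() < p_one else 0

def von_neumann(bit_stream, n_out):
    out = []
    while len(out) < n_out:
        b1, b2 = next(bit_stream), next(bit_stream)
        if b1 != b2:            # "01" -> 0 output... wait, emit b1: "10" -> 1, "01" -> 0
            out.append(b1)      # discard the correlated "00" and "11" pairs
    return out

bits = von_neumann(biased_source(), 32)
print(sum(bits), "ones out of", len(bits))
```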
Softermax: Hardware/Software Co-Design of an Efficient Softmax for Transformers Transformers have transformed the field of natural language processing. Their superior performance is largely attributed to the use of stacked “self-attention” layers, each of which consists of matrix multiplies as well as softmax operations. As a result, unlike other neural networks, the softmax operation accounts for a significant fraction of the total run-time of Transformers. To address this, ...
Efficient sparse-matrix multi-vector product on GPUs. Sparse Matrix-Vector (SpMV) and Sparse Matrix-Multivector (SpMM) products are key kernels for computational science and data science. While GPUs offer significantly higher peak performance and memory bandwidth than multicore CPUs, achieving high performance on sparse computations on GPUs is very challenging. A tremendous amount of recent research has focused on various GPU implementations of the SpMV kernel. But the multi-vector SpMM kernel has received much less attention. In this paper, we present an in-depth analysis to contrast SpMV and SpMM, and develop a new sparse-matrix representation and computation approach suited to achieving high data-movement efficiency and effective GPU parallelization of SpMM. Experimental evaluation using the entire SuiteSparse matrix suite demonstrates significant performance improvement over existing SpMM implementations from vendor libraries.
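As a baseline for what such kernels compute, here is a plain CSR sparse-matrix times dense multi-vector product in numpy; the paper's contribution is a GPU-oriented representation and parallelization strategy, none of which is shown here:

```python
import numpy as np

# Baseline CSR SpMM (sparse matrix x dense multi-vector), the kernel the
# paper targets on GPUs.
def csr_spmm(indptr, indices, data, B):
    n_rows = len(indptr) - 1
    C = np.zeros((n_rows, B.shape[1]))
    for row in range(n_rows):
        for j in range(indptr[row], indptr[row + 1]):
            C[row] += data[j] * B[indices[j]]   # accumulate a whole output row
    return C

# 3x3 sparse matrix [[1,0,2],[0,0,3],[4,0,0]] against 2 dense vectors.
indptr  = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 2, 0])
data    = np.array([1.0, 2.0, 3.0, 4.0])
B = np.arange(6.0).reshape(3, 2)
print(csr_spmm(indptr, indices, data, B))       # matches (dense A) @ B
```

Reusing each loaded row of B across all columns of the multi-vector is the key data-movement advantage SpMM has over repeated SpMV calls.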
In-Memory Data Parallel Processor. Recent developments in Non-Volatile Memories (NVMs) have opened up a new horizon for in-memory computing. Despite the significant performance gain offered by computational NVMs, previous works have relied on manual mapping of specialized kernels to the memory arrays, making it infeasible to execute more general workloads. We combat this problem by proposing a programmable in-memory processor architecture and data-parallel programming framework. The efficiency of the proposed in-memory processor comes from two sources: massive parallelism and reduction in data movement. A compact instruction set provides generalized computation capabilities for the memory array. The proposed programming framework seeks to leverage the underlying parallelism in the hardware by merging the concepts of data-flow and vector processing. To facilitate in-memory programming, we develop a compilation framework that takes a TensorFlow input and generates code for our in-memory processor. Our results demonstrate 7.5x speedup over a multi-core CPU server for a set of applications from Parsec and 763x speedup over a server-class GPU for a set of Rodinia benchmarks.
PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning Convolution neural networks (CNNs) are the heart of deep learning applications. Recent works PRIME [1] and ISAAC [2] demonstrated the promise of using resistive random access memory (ReRAM) to perform neural computations in memory. We found that training cannot be efficiently supported with the current schemes. First, they do not consider weight update and the complex data dependency in the training procedure. Second, ISAAC attempts to increase system throughput with a very deep pipeline, which is only beneficial when a large number of consecutive images can be fed into the architecture. In training, the notion of batch (e.g., 64) limits the number of images that can be processed consecutively, because the images in the next batch need to be processed based on the updated weights. Third, the deep pipeline in ISAAC is vulnerable to pipeline bubbles and execution stalls. In this paper, we present PipeLayer, a ReRAM-based PIM accelerator for CNNs that supports both training and testing. We analyze data dependency and weight update in training algorithms and propose an efficient pipeline to exploit inter-layer parallelism. To exploit intra-layer parallelism, we propose a highly parallel design based on the notions of parallelism granularity and weight replication. With these design choices, PipeLayer enables highly pipelined execution of both training and testing, without introducing the potential stalls of previous work. Experimental results show that PipeLayer achieves an average speedup of 42.45x compared with a GPU platform. The average energy saving of PipeLayer compared with the GPU implementation is 7.17x.
Understanding Reuse, Performance, and Hardware Cost of DNN Dataflow: A Data-Centric Approach The data partitioning and scheduling strategies used by DNN accelerators to leverage reuse and perform staging are known as dataflow, which directly impacts the performance and energy efficiency of DNN accelerators. An accelerator microarchitecture dictates the dataflow(s) that can be employed to execute layers in a DNN. Selecting a dataflow for a layer can have a large impact on utilization and energy efficiency, but there is a lack of understanding on the choices and consequences of dataflow, and of tools and methodologies to help architects explore the co-optimization design space. In this work, we first introduce a set of data-centric directives to concisely specify the DNN dataflow space in a compiler-friendly form. We then show how these directives can be analyzed to infer various forms of reuse and to exploit them using hardware capabilities. We codify this analysis into an analytical cost model, MAESTRO (Modeling Accelerator Efficiency via Spatio-Temporal Reuse and Occupancy), that estimates various cost-benefit tradeoffs of a dataflow including execution time and energy efficiency for a DNN model and hardware configuration. We demonstrate the use of MAESTRO to drive a hardware design space exploration experiment, which searches across 480M designs to identify 2.5M valid designs at an average rate of 0.17M designs per second, including Pareto-optimal throughput- and energy-optimized design points.
Interprocedural pointer alias analysis We present practical approximation methods for computing and representing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the following contributions: (1) a framework for interprocedural pointer alias analysis that handles function pointers by constructing the program call graph while alias analysis is being performed; (2) a flow-sensitive interprocedural pointer alias analysis algorithm; (3) a flow-insensitive interprocedural pointer alias analysis algorithm; (4) a flow-insensitive interprocedural pointer alias analysis algorithm that incorporates kill information to improve precision; (5) empirical measurements of the efficiency and precision of the three interprocedural alias analysis algorithms.
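A flow-insensitive points-to analysis in the spirit of contribution (3) can be sketched as a constraint fixpoint. The code below is Andersen-style inclusion analysis, a related textbook formulation rather than the paper's exact algorithm, and it ignores procedures and function pointers:

```python
# Sketch of a flow-insensitive points-to fixpoint using Andersen-style
# inclusion constraints. Statement forms:
#   ("addr",  p, x): p = &x        ("copy",  p, q): p = q
#   ("load",  p, q): p = *q        ("store", p, q): *p = q
def points_to(stmts):
    pts = {}
    get = lambda v: pts.setdefault(v, set())
    changed = True
    while changed:                                  # iterate to a fixpoint
        changed = False
        for kind, a, b in stmts:
            if kind == "store":                     # every target of a absorbs b's set
                for t in list(get(a)):
                    if not get(b) <= get(t):
                        get(t).update(get(b))
                        changed = True
                continue
            if kind == "addr":
                new = {b}
            elif kind == "copy":
                new = get(b)
            else:                                   # load: union over a's targets' targets
                new = set().union(*(get(t) for t in get(b)))
            if not new <= get(a):
                get(a).update(new)
                changed = True
    return pts

prog = [("addr", "p", "x"), ("addr", "q", "y"),
        ("copy", "r", "p"),
        ("store", "r", "q"),                        # *r = q  =>  x may point to y
        ("load", "s", "r")]                         # s = *r  =>  s may point to y
print(points_to(prog))
```

Flow-sensitive variants, like the paper's algorithm (2), track a separate solution per program point and are correspondingly more precise and more expensive.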
Asymptotic stability for time-variant systems and observability: Uniform and nonuniform criteria This paper presents some new criteria for uniform and nonuniform asymptotic stability of equilibria for time-variant differential equations and this within a Lyapunov approach. The stability criteria are formulated in terms of certain observability conditions with the output derived from the Lyapunov function. For some classes of systems, this system theoretic interpretation proves to be fruitful since-after establishing the invariance of observability under output injection-this enables us to check the stability criteria on a simpler system. This procedure is illustrated for some classical examples.
Model Predictive Climate Control of a Swiss Office Building: Implementation, Results, and Cost-Benefit Analysis This paper reports the final results of the predictive building control project OptiControl-II that encompassed seven months of model predictive control (MPC) of a fully occupied Swiss office building. First, this paper provides a comprehensive literature review of experimental building MPC studies. Second, we describe the chosen control setup and modeling, the main experimental results, as well as simulation-based comparisons of MPC to industry-standard control using the EnergyPlus simulation software. Third, the costs and benefits of building MPC for cases similar to the investigated building are analyzed. In the experiments, MPC controlled the building reliably and achieved a good comfort level. The simulations suggested a significantly improved control performance in terms of energy and comfort compared with the previously installed industry-standard control strategy. However, for similar buildings and with the tools currently available, the required initial investment is likely too high to justify the deployment in everyday building projects on the basis of operating cost savings alone. Nevertheless, development investments in an MPC building automation framework and a tool for modeling building thermal dynamics together with the increasing importance of demand response and rising energy prices may push the technology into the net benefit range.
Randomized Last-Level Caches Are Still Vulnerable to Cache Side-Channel Attacks! But We Can Fix It Cache randomization has recently been revived as a promising defense against conflict-based cache side-channel attacks. As two of the latest implementations, CEASER-S and ScatterCache both claim to thwart conflict-based cache side-channel attacks using randomized skewed caches. Unfortunately, our experiments show that an attacker can easily find a usable eviction set within the chosen remap period...
1.016775
0.014286
0.014286
0.014286
0.014286
0.014286
0.014286
0.012552
0.004838
0.000058
0
0
0
0
Grand Pwning Unit: Accelerating Microarchitectural Attacks with the GPU Dark silicon is pushing processor vendors to add more specialized units such as accelerators to commodity processor chips. Unfortunately this is done without enough care to security. In this paper we look at the security implications of integrated Graphical Processor Units (GPUs) found in almost all mobile processors. We demonstrate that GPUs, already widely employed to accelerate a variety of benign applications such as image rendering, can also be used to "accelerate" microarchitectural attacks (i.e., making them more effective) on commodity platforms. In particular, we show that an attacker can build all the necessary primitives for performing effective GPU-based microarchitectural attacks and that these primitives are all exposed to the web through standardized browser extensions, allowing side-channel and Rowhammer attacks from JavaScript. These attacks bypass state-of-the-art mitigations and advance existing CPU-based attacks: we show the first end-to-end microarchitectural compromise of a browser running on a mobile phone in under two minutes by orchestrating our GPU primitives. While powerful, these GPU primitives are not easy to implement due to undocumented hardware features. We describe novel reverse engineering techniques for peeking into the previously unknown cache architecture and replacement policy of the Adreno 330, an integrated GPU found in many common mobile platforms. This information is necessary when building shader programs implementing our GPU primitives. We conclude by discussing mitigations against GPU-enabled attackers.
Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks Given the increasing impact of Rowhammer, and the dearth of adequate other hardware defenses, many in the security community have pinned their hopes on error-correcting code (ECC) memory as one of the few practical defenses against Rowhammer attacks. Specifically, the expectation is that the ECC algorithm will correct or detect any bits they manage to flip in memory in real-world settings. However, the extent to which ECC really protects against Rowhammer is an open research question, due to two key challenges. First, the details of the ECC implementations in commodity systems are not known. Second, existing Rowhammer exploitation techniques cannot yield reliable attacks in presence of ECC memory. In this paper, we address both challenges and provide concrete evidence of the susceptibility of ECC memory to Rowhammer attacks. To address the first challenge, we describe a novel approach that combines a custom-made hardware probe, Rowhammer bit flips, and a cold boot attack to reverse engineer ECC functions on commodity AMD and Intel processors. To address the second challenge, we present ECCploit, a new Rowhammer attack based on composable, data-controlled bit flips and a novel side channel in the ECC memory controller. We show that, while ECC memory does reduce the attack surface for Rowhammer, ECCploit still allows an attacker to mount reliable Rowhammer attacks against vulnerable ECC memory on a variety of systems and configurations. In addition, we show that, despite the non-trivial constraints imposed by ECC, ECCploit can still be powerful in practice and mimic the behavior of prior Rowhammer exploits.
Towards Evaluating the Robustness of Neural Networks Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
TRRespass: Exploiting the Many Sides of Target Row Refresh After a plethora of high-profile RowHammer attacks, CPU and DRAM vendors scrambled to deliver what was meant to be the definitive hardware solution against the RowHammer problem: Target Row Refresh (TRR). A common belief among practitioners is that, for the latest generation of DDR4 systems that are protected by TRR, RowHammer is no longer an issue in practice. However, in reality, very little is known about TRR. How does TRR exactly prevent RowHammer? Which parts of a system are responsible for operating the TRR mechanism? Does TRR completely solve the RowHammer problem or does it have weaknesses? In this paper, we demystify the inner workings of TRR and debunk its security guarantees. We show that what is advertised as a single mitigation mechanism is actually a series of different solutions coalesced under the umbrella term Target Row Refresh. We inspect and disclose, via a deep analysis, different existing TRR solutions and demonstrate that modern implementations operate entirely inside DRAM chips. Despite the difficulties of analyzing in-DRAM mitigations, we describe novel techniques for gaining insights into the operation of these mitigation mechanisms. These insights allow us to build TRRespass, a scalable black-box RowHammer fuzzer that we evaluate on 42 recent DDR4 modules. TRRespass shows that even the latest generation DDR4 chips with in-DRAM TRR, immune to all known RowHammer attacks, are often still vulnerable to new TRR-aware variants of RowHammer that we develop. In particular, TRRespass finds that, on present-day DDR4 modules, RowHammer is still possible when many aggressor rows are used (as many as 19 in some cases), with a method we generally refer to as Many-sided RowHammer. Overall, our analysis shows that 13 out of the 42 modules from all three major DRAM vendors (i.e., Samsung, Micron, and Hynix) are vulnerable to our TRR-aware RowHammer access patterns, and thus one can still mount existing state-of-the-art system-level RowHammer attacks. In addition to DDR4, we also experiment with LPDDR4(X) chips and show that they are susceptible to RowHammer bit flips too. Our results provide concrete evidence that the pursuit of better RowHammer mitigations must continue.
Virtual Platform to Analyze the Security of a System on Chip at Microarchitectural Level The processors (CPUs) embedded in System on Chip (SoC) have to face recent attacks taking advantage of vulnerabilities/features in their microarchitectures to retrieve secret information. Indeed, the increase in complexity of modern CPU and SoC is mainly driven by the seek of performance rather than security. Even if efforts like isolation techniques have been taken to thwart cyberattacks, most mi...
ABSynthe: Automatic Blackbox Side-channel Synthesis on Commodity Microarchitectures
InvisiSpec - Making Speculative Execution Invisible in the Cache Hierarchy. Hardware speculation offers a major surface for micro-architectural covert and side channel attacks. Unfortunately, defending against speculative execution attacks is challenging. The reason is that speculations destined to be squashed execute incorrect instructions, outside the scope of what programmers and compilers reason about. Further, any change to micro-architectural state made by speculative execution can leak information. In this paper, we propose InvisiSpec, a novel strategy to defend against hardware speculation attacks in multiprocessors by making speculation invisible in the data cache hierarchy. InvisiSpec blocks micro-architectural covert and side channels through the multiprocessor data cache hierarchy due to speculative loads. In InvisiSpec, unsafe speculative loads read data into a speculative buffer, without modifying the cache hierarchy. When the loads become safe, InvisiSpec makes them visible to the rest of the system. InvisiSpec identifies loads that might have violated memory consistency and, at this time, forces them to perform a validation step. We propose two InvisiSpec designs: one to defend against Spectre-like attacks and another to defend against futuristic attacks, where any speculative load may pose a threat. Our simulations with 23 SPEC and 10 PARSEC workloads show that InvisiSpec is effective. Under TSO, using fences to defend against Spectre attacks slows down execution by 74% relative to a conventional, insecure processor; InvisiSpec reduces the execution slowdown to only 21%. Using fences to defend against futuristic attacks slows down execution by 208%; InvisiSpec reduces the slowdown to 72%.
Cache Storage Channels: Alias-Driven Attacks and Verified Countermeasures Caches pose a significant challenge to formal proofs of security for code executing on application processors, as the cache access pattern of security-critical services may leak secret information. This paper reveals a novel attack vector, exposing a low-noise cache storage channel that can be exploited by adapting well-known timing channel analysis techniques. The vector can also be used to attack various types of security-critical software such as hypervisors and application security monitors. The attack vector uses virtual aliases with mismatched memory attributes and self-modifying code to misconfigure the memory system, allowing an attacker to place incoherent copies of the same physical address into the caches and observe which addresses are stored in different levels of cache. We design and implement three different attacks using the new vector on trusted services and report on the discovery of an 128-bit key from an AES encryption service running in TrustZone on Raspberry Pi 2. Moreover, we subvert the integrity properties of an ARMv7 hypervisor that was formally verified against a cache-less model. We evaluate well-known countermeasures against the new attack vector and propose a verification methodology that allows to formally prove the effectiveness of defence mechanisms on the binary code of the trusted software.
Beyond Stack Smashing: Recent Advances in Exploiting Buffer Overruns This article describes three powerful general-purpose families of exploits for buffer overruns: arc injection, pointer subterfuge, and heap smashing. These new techniques go beyond the traditional "stack smashing" attack and invalidate traditional assumptions about buffer overruns.
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
Automating the Verification of SDR Base band Signal Processing Algorithms Developed on DSP/FPGA Platform. This paper suggests an automated validation approach for testing advanced digital signal processing algorithms. These algorithms, which are intended for the implementation of the baseband processor of software-defined radios, are developed in software (Digital Signal Processor, DSP) and hardware (FPGA) environments in order to meet real-time and offline requirements. The automation of the testing of such algorithms
Disturbance rejection for time-delay systems based on the equivalent-input-disturbance approach This paper presents a disturbance rejection method for time-delay systems. The configuration of the control system is constructed based on the equivalent-input-disturbance (EID) approach. A modified state observer is applied to reconstruct the state of the time-delay plant. A disturbance estimator is designed to actively compensate for the disturbances. Under such a construction of the system, both matched and unmatched disturbances are rejected effectively without requiring any prior knowledge of the disturbance or inverse dynamics of the plant. A representation of the closed-loop system is derived for stability analysis and controller design. Simulation results demonstrate the validity and superiority of the proposed method.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
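The slope-dependent sampling idea can be illustrated numerically. The sketch below is a toy under stated assumptions (rates, gain, and the test waveform are invented, not the paper's circuit): the instantaneous sample rate is increased where the input slope is steep and relaxed where it is flat.

```python
# Toy illustration of slope-dependent nonuniform sampling: the instantaneous
# rate grows with |slope| of the input. Rates, gain, and the test waveform
# are invented for illustration, not taken from the paper.
import numpy as np

def nonuniform_sample(signal, t, min_rate=10.0, max_rate=500.0, gain=50.0):
    """Return (times, values); the next sample arrives sooner where slope is steep."""
    times, values = [t[0]], [signal[0]]
    i = 0
    while i < len(t) - 1:
        slope = abs(signal[i + 1] - signal[i]) / (t[i + 1] - t[i])
        rate = np.clip(min_rate + gain * slope, min_rate, max_rate)   # in Hz
        i = int(np.searchsorted(t, times[-1] + 1.0 / rate))
        if i >= len(t):
            break
        times.append(t[i])
        values.append(signal[i])
    return np.array(times), np.array(values)

t = np.linspace(0, 1, 10_000)
ecg_like = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.exp(-((t - 0.5) / 0.01) ** 2)
ts, vs = nonuniform_sample(ecg_like, t)
print(f"{len(ts)} samples taken over 1 s (dense near the sharp peak)")
```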
1.046685
0.046444
0.04
0.04
0.04
0.02
0.006948
0.002
0.000148
0
0
0
0
0
CONV-SRAM: An Energy-Efficient SRAM With In-Memory Dot-Product Computation for Low-Power Convolutional Neural Networks This paper presents an energy-efficient static random access memory (SRAM) with embedded dot-product computation capability, for binary-weight convolutional neural networks. A 10T bit-cell-based SRAM array is used to store the 1-b filter weights. The array implements dot-product as a weighted average of the bitline voltages, which are proportional to the digital input values. Local integrating analog-to-digital converters compute the digital convolution outputs, corresponding to each filter. We have successfully demonstrated functionality (>98% accuracy) with the 10 000 test images in the MNIST hand-written digit recognition data set, using 6-b inputs/outputs. Compared to conventional full-digital implementations using small bitwidths, we achieve similar or better energy efficiency, by reducing data transfer, due to the highly parallel in-memory analog computations.
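A behavioral model makes the bitline-averaging arithmetic concrete. The following sketch is an idealized numeric model (not the circuit): 6-bit inputs map to bitline voltages, the stored binary weights select the sign, and a local ADC digitizes the analog average, reproducing the ideal dot product.

```python
# Idealized numeric model (not the circuit) of the bitline-averaging dot
# product: 6-bit inputs become bitline voltages, the stored +/-1 weights pick
# the sign, and an integrating ADC digitizes the analog average.
import numpy as np

def conv_sram_dot(inputs_6b, weights_pm1, vdd=1.0):
    v_bl = vdd * inputs_6b / 63.0               # voltage proportional to input
    v_avg = np.mean(weights_pm1 * v_bl)         # weighted average on the array
    return int(np.round(v_avg / vdd * 63.0))    # local ADC output (6-bit scale)

rng = np.random.default_rng(0)
x = rng.integers(0, 64, size=64)                # 6-b activations
w = rng.choice([-1, 1], size=64)                # 1-b filter weights
print(conv_sram_dot(x, w), int(np.round(np.mean(w * x))))   # both match
```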
A 4-Kb 1-to-8-bit Configurable 6T SRAM-Based Computation-in-Memory Unit-Macro for CNN-Based AI Edge Processors Previous SRAM-based computing-in-memory (SRAM-CIM) macros suffer small read margins for high-precision operations, large cell array area overhead, and limited compatibility with many input and weight configurations. This work presents a 1-to-8-bit configurable SRAM CIM unit-macro using: 1) a hybrid structure combining 6T-SRAM based in-memory binary product-sum (PS) operations with digital near-memory-computing multibit PS accumulation to increase read accuracy and reduce area overhead; 2) column-based place-value-grouped weight mapping and a serial-bit input (SBIN) mapping scheme to facilitate reconfiguration and increase array efficiency under various input and weight configurations; 3) a self-reference multilevel reader (SRMLR) to reduce read-out energy and achieve a sensing margin 2× that of the mid-point reference scheme; and 4) an input-aware bitline voltage compensation scheme to ensure successful read operations across various input-weight patterns. A 4-Kb configurable 6T-SRAM CIM unit-macro was fabricated using a 55-nm CMOS process with foundry 6T-SRAM cells. The resulting macro achieved access times of 3.5 ns per cycle (pipeline) and energy efficiency of 0.6–40.2 TOPS/W under binary to 8-b input/8-b weight precision.
Accelerating real-time embedded scene labeling with convolutional networks Today there is a clear trend towards deploying advanced computer vision (CV) systems in a growing number of application scenarios with strong real-time and power constraints. Brain-inspired algorithms capable of achieving record-breaking results combined with embedded vision systems are the best candidate for the future of CV and video systems due to their flexibility and high accuracy in the area of image understanding. In this paper, we present an optimized convolutional network implementation suitable for real-time scene labeling on embedded platforms. We show that our algorithm can achieve up to 96 GOp/s, running on the Nvidia Tegra K1 embedded SoC. We present experimental results, compare them to the state-of-the-art, and demonstrate that for scene labeling our approach achieves a 1.5× improvement in throughput when compared to a modern desktop CPU at a power budget of only 11 W.
A CNN Accelerator on FPGA Using Depthwise Separable Convolution. Convolutional neural networks (CNNs) have been widely deployed in the fields of computer vision and pattern recognition because of their high accuracy. However, large convolution operations are computing intensive and often require a powerful computing platform such as a graphics processing unit. This makes it difficult to apply CNNs to portable devices. The state-of-the-art CNNs, such as MobileNe...
O3BNN-R: An Out-of-Order Architecture for High-Performance and Regularized BNN Inference Binarized Neural Networks (BNN), which significantly reduce computational complexity and memory demand, have shown potential in cost- and power-restricted domains, such as IoT and smart edge-devices, where reaching certain accuracy bars is sufficient and real-time is highly desired. In this article, we demonstrate that the highly-condensed BNN model can be shrunk significantly by dynamically pruning irregular redundant edges. Based on two new observations on BNN-specific properties, an out-of-order (OoO) architecture, O3BNN-R, which can curtail edge evaluation in cases where the binary output of a neuron can be determined early at runtime during inference, is proposed. Similar to instruction level parallelism (ILP), fine-grained, irregular, and runtime pruning opportunities are traditionally presumed to be difficult to exploit. To further enhance the pruning opportunities, we conduct an algorithm/architecture co-design approach where we augment the loss function during the training stage with specialized regularization terms favoring edge pruning. We evaluate our design on an embedded FPGA using networks that include VGG-16, AlexNet for ImageNet, and a VGG-like network for Cifar-10. Results show that O3BNN-R without regularization can prune, on average, 30 percent of the operations, without any accuracy loss, bringing 2.2× inference-speedup, and on average 34× energy-efficiency improvement over state-of-the-art BNN implementations on FPGA/GPU/CPU. With regularization at training, the performance is further improved, on average, by 15 percent.
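The early-termination condition behind the runtime pruning is simple to state: once the partial ±1 accumulation exceeds the number of edges still unevaluated, the neuron's binary output is fixed. A minimal sketch follows, with the function name and signature assumed for illustration.

```python
# Sketch of the early-exit rule (names and signature assumed): stop evaluating
# a binary neuron's edges once the partial sum can no longer change the sign.
def bnn_neuron_early_exit(x_bits, w_bits):
    """x_bits, w_bits: sequences of +/-1; returns (sign, edges skipped)."""
    acc, remaining = 0, len(x_bits)
    for x, w in zip(x_bits, w_bits):
        acc += x * w
        remaining -= 1
        if abs(acc) > remaining:          # outcome decided: prune the rest
            return (1 if acc > 0 else -1), remaining
    return (1 if acc > 0 else -1), 0

out, skipped = bnn_neuron_early_exit([1, 1, 1, -1, 1, 1], [1, 1, 1, 1, 1, 1])
print(out, f"{skipped} edge(s) skipped")
```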
BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W. A versatile reconfigurable accelerator architecture for binary/ternary deep neural networks is presented. In-memory neural network processing without any external data accesses, sustained by the symmetry and simplicity of the computation of the binary/ternary neural network, improves the energy efficiency dramatically. The prototype chip is fabricated, and it achieves 1.4 TOPS (tera operations per...
A 7-nm Compute-in-Memory SRAM Macro Supporting Multi-Bit Input, Weight and Output and Achieving 351 TOPS/W and 372.4 GOPS In this work, we present a compute-in-memory (CIM) macro built around a standard two-port compiler macro using foundry 8T bit-cell in 7-nm FinFET technology. The proposed design supports 1024 4 b × 4 b multiply-and-accumulate (MAC) computations simultaneously. The 4-bit input is represented by the number of read word-line (RWL) pulses, while the 4-bit weight is realized by charge sharing among binary-weighted computation caps. Each unit of computation cap is formed by the inherent cap of the sense amplifier (SA) inside the 4-bit Flash ADC, which saves area and minimizes kick-back effect. Access time is 5.5 ns with 0.8-V power supply at room temperature. The proposed design achieves energy efficiency of 351 TOPS/W and throughput of 372.4 GOPS. Implications of our design from neural network implementation and accuracy perspectives are also discussed.
Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without...
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-μm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. Voltage-controlled oscillators (VCOs) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCOs, expressed in terms of jitter and phase noise, have a direct impact on the timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators together with a lack of experimental verification of underlying theories has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCOs. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated (1), (2), our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise
Stability Analysis and Design of Impulsive Control Systems With Time Delay A class of impulsive control systems with time-varying delays is considered. By establishing an impulsive delay differential inequality, we analyze the global exponential stability of the impulsive delay systems and estimate the exponential convergence rate. On the basis of the analysis, a design procedure of impulsive controller is presented. The designed impulsive controller not only can globally exponentially stabilize the time delay systems, but also can control the exponential convergence rate of the systems. Two numerical examples are given to illustrate the effectiveness of the method.
An Electro-Magnetic Energy Harvesting System With 190 nW Idle Mode Power Consumption for a BAW Based Wireless Sensor Node. State-of-the-art wireless sensor nodes are mostly supplied by batteries. Such systems have the disadvantage that they are not maintenance free because of the limited lifetime of batteries. Instead, wireless sensor nodes or related devices can be remotely powered. To increase the operating range and applicability of these remotely powered devices an electro-magnetic energy harvester is developed in a 0.13 μm low cost CMOS technology. This paper presents an energy harvesting system that converts RF power to DC power to supply wireless sensor nodes, active transmitters or related systems with a power consumption up to the mW range. This energy harvesting system is used to power a wireless sensor node from the 900 MHz RF field. The wireless sensor node includes an on-chip temperature sensor and a bulk acoustic wave (BAW) based transmitter. The BAW resonator reduces the startup time of the transmitter to about 2 μs which reduces the amount of energy needed in one transmission cycle. The maximum output power of the transmitter is 5.4 dBm. The chip contains an ultra-low-power control unit and consumes only 190 nW in idle mode. The required input power is -19.7 dBm.
An Evaluation of High-Level Mechanistic Core Models Large core counts and complex cache hierarchies are increasing the burden placed on commonly used simulation and modeling techniques. Although analytical models provide fast results, they do not apply to complex, many-core shared-memory systems. In contrast, detailed cycle-level simulation can be accurate but also tends to be slow, which limits the number of configurations that can be evaluated. A middle ground is needed that provides for fast simulation of complex many-core processors while still providing accurate results. In this article, we explore, analyze, and compare the accuracy and simulation speed of high-abstraction core models as a potential solution to slow cycle-level simulation. We describe a number of enhancements to interval simulation to improve its accuracy while maintaining simulation speed. In addition, we introduce the instruction-window centric (IW-centric) core model, a new mechanistic core model that bridges the gap between interval simulation and cycle-accurate simulation by enabling high-speed simulations with higher levels of detail. We also show that using accurate core models like these is important for memory subsystem studies, and that simple, naive models, like a one-IPC core model, can lead to misleading and incorrect results and conclusions in practical design studies. Validation against real hardware shows good accuracy, with an average single-core error of 11.1% and a maximum of 18.8% for the IW-centric model with a 1.5× slowdown compared to interval simulation.
A 178.9-dB FoM 128-dB SFDR VCO-Based AFE for ExG Readouts With a Calibration-Free Differential Pulse Code Modulation Technique This article presents a voltage-controlled oscillator (VCO)-based analog front end (AFE) for ExG readout applications with both a wide dynamic range (DR) and high linearity. By using a differential pulse code modulation (DPCM) technique, VCO non-linearity is mitigated by operating the VCO in the small-signal linear regime. To minimize power consumption from the power-hungry gain error calibration,...
1.018512
0.018182
0.018182
0.018182
0.018182
0.014545
0.009091
0.00107
0
0
0
0
0
0
Preserving Differential Privacy and Utility of Non-stationary Data Streams. Data publishing poses many challenges regarding the efforts to preserve data privacy, on one hand, and maintain its high utility, on the other hand. The Privacy Preserving Data Publishing field (PPDP) has emerged as a possible solution to such trade-off, allowing data miners to analyze the published data, while providing a sufficient degree of privacy. Most existing anonymization platforms deal with static and stationary data, which can be scanned at least once before its publishing. More and more real-world applications generate streams of data which can be non-stationary, i.e., subject to a concept drift. In this paper, we introduce MiDiPSA (Microaggregation-based Differential Private Stream Anonymization) algorithm for non-stationary data streams, which aims at satisfying the constraints of k-anonymity, recursive (c, l)-diversity, and differential privacy while minimizing the information loss and the possible disclosure risk. The algorithm is implemented via four main steps: incremental clustering of the incoming tuples; incremental aggregation of the tuples in each cluster according to a pre-defined aggregation function; monitoring of the stream in order to detect possible concept drifts using a non-parametric Kolmogorov-Smirnov statistical test; and incremental publishing of anonymized tuples. Whenever a concept drift is detected, the clustering system is updated to reflect the current changes in the stream, without affecting the publishing process. In our empirical evaluation, we analyze the performance of various data stream classifiers on the anonymized data and compare it to their performance on the original data. We conduct experiments with seven benchmark data streams and show that our algorithm preserves privacy while providing higher utility, in comparison with other state-of-the-art anonymization algorithms.
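A skeleton of the four-step pipeline is sketched below (illustrative assumptions throughout: clustering is reduced to fixed-size groups, the differential-privacy noise addition and recursive (c, l)-diversity checks are omitted, and all parameters are invented). It keeps the shape of the algorithm: accumulate tuples, publish microaggregated groups of at least k, and monitor a sliding window with a Kolmogorov-Smirnov test for drift.

```python
# Skeleton of the four MiDiPSA steps under simplifying assumptions (no DP
# noise, no (c, l)-diversity check, fixed-size clusters, invented parameters).
import numpy as np
from scipy.stats import ks_2samp

def anonymize_stream(stream, k=5, drift_p=0.01, window=200):
    reference, recent, cluster = [], [], []
    for value in stream:
        cluster.append(value)
        recent = (recent + [value])[-window:]
        if len(reference) < window:
            reference.append(value)            # frozen reference window
        if len(cluster) >= k:                  # k-anonymity group complete
            yield [float(np.mean(cluster))] * len(cluster)   # microaggregation
            cluster = []
        # Step 3: nonparametric drift test between reference and recent data
        if len(reference) == window and len(recent) == window:
            if ks_2samp(reference, recent).pvalue < drift_p:
                reference = []                 # concept drift: rebuild reference

rng = np.random.default_rng(1)
drifting = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
groups = list(anonymize_stream(drifting))
print(f"{len(groups)} anonymized groups published")
```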
Automated text mining for requirements analysis of policy documents Businesses and organizations in jurisdictions around the world are required by law to provide their customers and users with information about their business practices in the form of policy documents. Requirements engineers analyze these documents as sources of requirements, but this analysis is a time-consuming and mostly manual process. Moreover, policy documents contain legalese and present readability challenges to requirements engineers seeking to analyze them. In this paper, we perform a large-scale analysis of 2,061 policy documents, including policy documents from the Google Top 1000 most visited websites and the Fortune 500 companies, for three purposes: (1) to assess the readability of these policy documents for requirements engineers; (2) to determine if automated text mining can indicate whether a policy document contains requirements expressed as either privacy protections or vulnerabilities; and (3) to establish the generalizability of prior work in the identification of privacy protections and vulnerabilities from privacy policies to other policy documents. Our results suggest that this requirements analysis technique, developed on a small set of policy documents in two domains, may generalize to other domains.
Crowdsensing in Smart Cities: Overview, Platforms, and Environment Sensing Issues. Evidence shows that Smart Cities are starting to materialise in our lives through the gradual introduction of the Internet of Things (IoT) paradigm. In this scope, crowdsensing emerges as a powerful solution to address environmental monitoring, allowing to control air pollution levels in crowded urban areas in a distributed, collaborative, inexpensive and accurate manner. However, even though technology is already available, such environmental sensing devices have not yet reached consumers. In this paper, we present an analysis of candidate technologies for crowdsensing architectures, along with the requirements for empowering users with air monitoring capabilities. Specifically, we start by providing an overview of the most relevant IoT architectures and protocols. Then, we present the general design of an off-the-shelf mobile environmental sensor able to cope with air quality monitoring requirements; we explore different hardware options to develop the desired sensing unit using readily available devices, discussing the main technical issues associated with each option, thereby opening new opportunities in terms of environmental monitoring programs.
A Decentralized Approach for Resource Discovery using Metadata Replication in Edge Networks Recent advancements in distributed systems have enabled deploying low-latency edge applications (i.e., IoT applications) in proximity to the end-users in edge networks. The stringent requirements combined with heterogeneous, resource-constrained and dynamic edge networks make the deployment process a challenging task. Besides that, the lack of resource discovery features makes it particularly difficult to fully exploit available resources (i.e., computational, storage, and IoT resources) provided by low-powered edge devices. To that end, this article proposes a decentralized resource discovery mechanism that enables discovering resources in an automatic manner in edge networks. Through replicating resource descriptions (i.e., metadata), edge devices exchange information about available resources within their scope in a peer-to-peer manner. To handle the resource discovery complexity, we propose a solution to build edge networks as a flat model and enable edge devices to be organized in clusters. Our approach supports the system in coping with the dynamicity and uncertainty of edge networks. We discuss the architecture, processes of the approach, and the experiments we conducted on a testbed to validate its feasibility on resource-constrained edge networks.
Anonymizing Sensor Data on the Edge: A Representation Learning and Transformation Approach The abundance of data collected by sensors in Internet of Things devices and the success of deep neural networks in uncovering hidden patterns in time series data have led to mounting privacy concerns. This is because private and sensitive information can be potentially learned from sensor data by applications that have access to this data. In this article, we aim to examine the tradeoff between utility and privacy loss by learning low-dimensional representations that are useful for data obfuscation. We propose deterministic and probabilistic transformations in the latent space of a variational autoencoder to synthesize time series data such that intrusive inferences are prevented while desired inferences can still be made with sufficient accuracy. In the deterministic case, we use a linear transformation to move the representation of input data in the latent space such that the reconstructed data is likely to have the same public attribute but a different private attribute than the original input data. In the probabilistic case, we apply the linear transformation to the latent representation of input data with some probability. We compare our technique with autoencoder-based anonymization techniques and additionally show that it can anonymize data in real time on resource-constrained edge devices.
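A minimal sketch of both variants follows, assuming a trained encoder/decoder pair and a learned private-attribute direction; identity stand-ins keep the snippet runnable.

```python
# Minimal sketch of the deterministic (p = 1) and probabilistic (p < 1)
# latent-shift variants; encoder, decoder, and direction are assumptions.
import numpy as np

def anonymize(encode, decode, x, direction, alpha=1.0, p=1.0, rng=None):
    """Shift the latent code along `direction`; apply with probability p."""
    z = encode(x)
    if rng is None or rng.random() < p:        # p = 1: deterministic variant
        z = z + alpha * direction              # move toward another private attr
    return decode(z)

encode = decode = lambda v: v                  # stand-ins for the trained VAE
x = np.array([0.2, -1.3, 0.7])
direction = np.array([1.0, 0.0, 0.0])          # assumed private-attribute axis
print(anonymize(encode, decode, x, direction))
```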
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
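The linear representation is easy to reproduce on a toy network. Below, a two-node Boolean network (constructed here for illustration, not taken from the paper) is written as x(t+1) = L x(t), where each joint state is a canonical basis vector and L is a 0-1 transition matrix; fixed points then appear on the diagonal of L, and the trace of L^k counts states lying on cycles whose length divides k.

```python
# Toy reproduction of the linear representation: a two-node Boolean network
# x1' = x2, x2' = x1 AND x2 (constructed here for illustration), written as
# x(t+1) = L @ x(t) with joint states as canonical basis vectors of length 4.
import numpy as np

states = [(a, b) for a in (1, 0) for b in (1, 0)]    # order: 11, 10, 01, 00
step = lambda a, b: (b, a & b)                        # the network's update rule

L = np.zeros((4, 4), dtype=int)
for j, s in enumerate(states):
    L[states.index(step(*s)), j] = 1                  # column j maps state j

fixed = [states[i] for i in range(4) if L[i, i] == 1]
print("transition matrix:\n", L)
print("fixed points:", fixed)                         # (1, 1) and (0, 0)
# trace(L^k) counts states on cycles whose length divides k:
print("states on cycles of length dividing 2:",
      int(np.trace(np.linalg.matrix_power(L, 2))))
```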
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
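The recognition rule reduces to a nearest-subspace test. A hedged sketch follows, where random matrices stand in for the low-dimensional cone bases that the paper estimates from its generative model.

```python
# Hedged sketch of the nearest-subspace recognition rule; random matrices
# stand in for the bases estimated from the generative model.
import numpy as np

def subspace_distance(B, y):
    """Euclidean distance from image y to span(B), via least squares."""
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.linalg.norm(y - B @ coeffs)

def recognize(bases, y):
    return min(bases, key=lambda person: subspace_distance(bases[person], y))

rng = np.random.default_rng(0)
bases = {p: rng.normal(size=(1024, 9)) for p in ("alice", "bob")}  # 9-D cones
probe = bases["bob"] @ rng.normal(size=9) + 0.01 * rng.normal(size=1024)
print(recognize(bases, probe))        # -> "bob"
```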
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
H∞ control for sampled-data nonlinear systems described by Takagi–Sugeno fuzzy systems In this paper we consider the design problem of output feedback H∞ controllers for sampled-data fuzzy systems. We first transfer them into equivalent jump fuzzy systems. We establish the so-called Bounded Real Lemma for jump fuzzy systems and give a design method of γ-suboptimal output feedback H∞ controllers in terms of two Riccati inequalities with jumps. We then apply the main results to the sampled-data fuzzy systems and obtain a design method of γ-suboptimal output feedback H∞ controllers. We give a numerical example and construct a γ-suboptimal output feedback H∞ controller.
Recurrent-Fuzzy-Neural-Network-Controlled Linear Induction Motor Servo Drive Using Genetic Algorithms A recurrent fuzzy neural network (RFNN) controller based on real-time genetic algorithms (GAs) is developed for a linear induction motor (LIM) servo drive in this paper. First, the dynamic model of an indirect field-oriented LIM servo drive is derived. Then, an online training RFNN with a backpropagation algorithm is introduced as the tracking controller. Moreover, to guarantee the global convergence of tracking error, a real-time GA is developed to search the optimal learning rates of the RFNN online. The GA-based RFNN control system is proposed to control the mover of the LIM for periodic motion. The theoretical analyses for the proposed GA-based RFNN controller are described in detail. Finally, simulated and experimental results show that the proposed controller provides high-performance dynamic characteristics and is robust with regard to plant parameter variations and external load disturbance.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
0
Computation Reuse in DNNs by Exploiting Input Similarity. In recent years, Deep Neural Networks (DNNs) have achieved tremendous success for diverse problems such as classification and decision making. Efficient support for DNNs on CPUs, GPUs and accelerators has become a prolific area of research, resulting in a plethora of techniques for energy-efficient DNN inference. However, previous proposals focus on a single execution of a DNN. Popular applications, such as speech recognition or video classification, require multiple back-to-back executions of a DNN to process a sequence of inputs (e.g., audio frames, images). In this paper, we show that consecutive inputs exhibit a high degree of similarity, causing the inputs/outputs of the different layers to be extremely similar for successive frames of speech or images of a video. Based on this observation, we propose a technique to reuse some results of the previous execution, instead of computing the entire DNN. Computations related to inputs with negligible changes can be avoided with minor impact on accuracy, saving a large percentage of computations and memory accesses. We propose an implementation of our reuse-based inference scheme on top of a state-of-the-art DNN accelerator. Results show that, on average, more than 60% of the inputs of any neural network layer tested exhibit negligible changes with respect to the previous execution. Avoiding the memory accesses and computations for these inputs results in 63% energy savings on average.
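For a linear layer the reuse scheme has an exact incremental form: if only a few inputs changed since the previous frame, the new output is the old output plus the weights applied to the (thresholded) input delta. A toy sketch, with all names and the threshold invented:

```python
# Toy sketch (names and threshold invented) of incremental reuse for a linear
# layer: new output = old output + W @ (thresholded input delta). Dropping
# sub-threshold deltas is the paper's accuracy/compute tradeoff.
import numpy as np

class ReuseLayer:
    def __init__(self, weights, threshold=1e-3):
        self.w, self.th = weights, threshold
        self.prev_in, self.prev_out = None, None

    def forward(self, x):
        if self.prev_in is None:
            self.prev_out = self.w @ x                   # first frame: full compute
        else:
            changed = np.abs(x - self.prev_in) > self.th
            delta = (x - self.prev_in) * changed         # only changed inputs fire
            self.prev_out = self.prev_out + self.w @ delta
            print(f"reused {100 * (1 - changed.mean()):.0f}% of inputs")
        self.prev_in = x.copy()
        return self.prev_out

rng = np.random.default_rng(0)
layer = ReuseLayer(rng.normal(size=(16, 64)))
frame1 = rng.normal(size=64)
frame2 = frame1 + (rng.random(64) < 0.1) * 0.5           # ~10% of inputs change
layer.forward(frame1)
layer.forward(frame2)
```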
An Energy-Efficient FPGA-Based Deconvolutional Neural Networks Accelerator for Single Image Super-Resolution Convolutional neural networks (CNNs) demonstrate excellent performance in various computer vision applications. In recent years, FPGA-based CNN accelerators have been proposed for optimizing performance and power efficiency. Most accelerators are designed for object detection and recognition algorithms that are performed on low-resolution (LR) images. However, real-time image super-resolution (SR) cannot be implemented on a typical accelerator because of the long execution cycles required to generate high-resolution (HR) images, such as those used in ultra-high-definition (UHD) systems. In this paper, we propose a novel CNN accelerator with efficient parallelization methods for SR applications. First, we propose a new methodology for optimizing the deconvolutional neural networks (DCNNs) used for increasing feature maps. Secondly, we propose a novel method to optimize CNN dataflow so that the SR algorithm can be driven at low power in display applications. Finally, we quantize and compress a DCNN-based SR algorithm into an optimal model for efficient inference using on-chip memory. We present an energy-efficient architecture for SR and validate our architecture on a mobile panel with quad-high-definition (QHD) resolution. Our experimental results show that, with the same hardware resources, the proposed DCNN accelerator achieves a throughput up to 108 times greater than that of a conventional DCNN accelerator. In addition, our SR system achieves an energy efficiency of 144.9 GOPS/W, 293.0 GOPS/W, and 500.2 GOPS/W at SR scale factors of 2, 3, and 4, respectively. Furthermore, we demonstrate that our system can restore HR images to a high quality while greatly reducing the data bit-width and the number of parameters compared to conventional SR algorithms.
A High-Throughput and Power-Efficient FPGA Implementation of YOLO CNN for Object Detection Convolutional neural networks (CNNs) require numerous computations and external memory accesses. Frequent accesses to off-chip memory cause slow processing and large power dissipation. For real-time object detection with high throughput and power efficiency, this paper presents a Tera-OPS streaming hardware accelerator implementing a you-only-look-once (YOLO) CNN. The parameters of the YOLO CNN are retrained and quantized with the PASCAL VOC data set using binary weight and flexible low-bit activation. The binary weight enables storing the entire network model in block RAMs of a field-programmable gate array (FPGA) to reduce off-chip accesses aggressively and, thereby, achieve significant performance enhancement. In the proposed design, all convolutional layers are fully pipelined for enhanced hardware utilization. The input image is delivered to the accelerator line-by-line. Similarly, the output from the previous layer is transmitted to the next layer line-by-line. The intermediate data are fully reused across layers, thereby eliminating external memory accesses. The decreased dynamic random access memory (DRAM) accesses reduce DRAM power consumption. Furthermore, as the convolutional layers are fully parameterized, it is easy to scale up the network. In this streaming design, each convolution layer is mapped to a dedicated hardware block. Therefore, it outperforms the “one-size-fits-all” designs in both performance and power efficiency. This CNN implemented using VC707 FPGA achieves a throughput of 1.877 tera operations per second (TOPS) at 200 MHz with batch processing while consuming 18.29 W of on-chip power, which shows the best power efficiency compared with the previous research. As for object detection accuracy, it achieves a mean average precision (mAP) of 64.16% for the PASCAL VOC 2007 data set that is only 2.63% lower than the mAP of the same YOLO network with full precision.
WinoNN: Optimizing FPGA-Based Convolutional Neural Network Accelerators Using Sparse Winograd Algorithm In recent years, a variety of accelerators on FPGAs have been proposed to speed up the convolutional neural network (CNN) in many domain-specific application fields. Besides, some optimization algorithms, such as fast algorithms and network sparsity, have greatly reduced the theoretical computational workload of CNN inference. There are currently a few accelerators on FPGAs that support both the fast Winograd algorithm (WinoA) and network sparsity to minimize the amount of computation. However, on the one hand, these architectures feed data into processing elements (PEs) in units of blocks, some boundary losses caused by sparse irregularities cannot be avoided. On the other hand, these works have not discussed the design space exploration under the sparse condition. In this article, we propose a novel accelerator called WinoNN. We fully discuss the challenges faced by supporting WinoA, weight sparsity, and activation sparsity simultaneously. To minimize the online encoding overhead caused by activation sparsity, an efficient encoding format called multibit mask (MBM) is proposed. To handle the irregularities of sparse data, we proposed a novel Scatter-Compute-Gather method in hardware design, combined with a freely sliding buffer to achieve fine-grained data loading to minimize the boundary waste. Finally, we combine a theoretical analysis and experimental method to explore the design space, allowing WinoNN to get the best performance on a specific FPGA. Our high scalability design enables us to deploy sparse Winograd accelerators on very small embedded FPGAs, which is not supported in previous works. The experimental results on VGG16 show that we achieve the highest digital signal processing unit (DSP) efficiency and highest energy efficiency compared with the state-of-the-art sparse architectures.
Bit fusion: bit-level dynamically composable architecture for accelerating deep neural networks Hardware acceleration of Deep Neural Networks (DNNs) aims to tame their enormous compute intensity. Fully realizing the potential of acceleration in this domain requires understanding and leveraging algorithmic properties of DNNs. This paper builds upon the algorithmic insight that bitwidth of operations in DNNs can be reduced without compromising their classification accuracy. However, to prevent loss of accuracy, the bitwidth varies significantly across DNNs and it may even be adjusted for each layer individually. Thus, a fixed-bitwidth accelerator would either offer limited benefits to accommodate the worst-case bitwidth requirements, or inevitably lead to a degradation in final accuracy. To alleviate these deficiencies, this work introduces dynamic bit-level fusion/decomposition as a new dimension in the design of DNN accelerators. We explore this dimension by designing Bit Fusion, a bit-flexible accelerator, that constitutes an array of bit-level processing elements that dynamically fuse to match the bitwidth of individual DNN layers. This flexibility in the architecture enables minimizing the computation and the communication at the finest granularity possible with no loss in accuracy. We evaluate the benefits of Bit Fusion using eight real-world feed-forward and recurrent DNNs. The proposed microarchitecture is implemented in Verilog and synthesized in 45 nm technology. Using the synthesis results and cycle accurate simulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN accelerators, Eyeriss [1] and Stripes [2]. In the same area, frequency, and process technology, Bit Fusion offers 3.9X speedup and 5.1X energy savings over Eyeriss. Compared to Stripes, Bit Fusion provides 2.6X speedup and 3.9X energy reduction at 45 nm node when Bit Fusion area and frequency are set to those of Stripes. Scaling to GPU technology node of 16 nm, Bit Fusion almost matches the performance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while Bit Fusion merely consumes 895 milliwatts of power.
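The bit-level decomposition underlying the fusible PEs can be checked with plain integer arithmetic: an 8 b × 8 b product is a shift-add of 2 b × 2 b partial products, and narrower operands simply need fewer fused PEs. A toy demonstration (not the microarchitecture itself):

```python
# Toy demonstration (integer arithmetic only, not the microarchitecture) of
# bit-level decomposition: an 8b x 8b product assembled by shift-adding
# 2b x 2b partial products, the granularity at which the PEs fuse.
def fused_multiply(a, b, bits=8, chunk=2):
    total = 0
    for i in range(0, bits, chunk):                   # 2-bit slices of a
        a_slice = (a >> i) & ((1 << chunk) - 1)
        for j in range(0, bits, chunk):               # 2-bit slices of b
            b_slice = (b >> j) & ((1 << chunk) - 1)
            total += (a_slice * b_slice) << (i + j)   # shifted partial product
    return total

assert fused_multiply(173, 94) == 173 * 94            # 8-bit operands: 4x4 PEs
assert fused_multiply(13, 9, bits=4) == 13 * 9        # 4-bit layer: fewer PEs
print("bit-level decomposition reproduces exact products")
```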
Ramulator: A Fast and Extensible DRAM Simulator Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today’s DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TLDRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.
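The decoupled-template design can be sketched as a generic timing engine parameterized by a per-standard spec object, so that supporting a new standard means supplying data rather than new simulator code. Everything below (class names, timing values) is invented for illustration, not Ramulator's actual API.

```python
# Invented-name sketch of the decoupled template: a generic timing engine plus
# per-standard parameter objects (timing values are illustrative only).
from dataclasses import dataclass

@dataclass
class StandardSpec:
    name: str
    tRCD: int   # activate-to-read delay (cycles)
    tCL: int    # read-to-data delay (cycles)
    tRP: int    # precharge delay (cycles)

DDR3 = StandardSpec("DDR3-like", tRCD=11, tCL=11, tRP=11)
LPDDR4 = StandardSpec("LPDDR4-like", tRCD=29, tCL=28, tRP=29)

class GenericDram:
    """Core engine: encodes the command flow, not any standard's numbers."""
    def __init__(self, spec):
        self.spec, self.open_row, self.clock = spec, None, 0

    def read(self, row):
        if self.open_row != row:                 # row miss: precharge + activate
            if self.open_row is not None:
                self.clock += self.spec.tRP
            self.clock += self.spec.tRCD
            self.open_row = row
        self.clock += self.spec.tCL
        return self.clock                        # cycle at which data returns

for spec in (DDR3, LPDDR4):
    dram = GenericDram(spec)
    print(spec.name, [dram.read(r) for r in (1, 1, 2)])
```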
Area Efficient ROM-Embedded SRAM Cache There are many important applications, such as math function evaluation, digital signal processing, and built-in self-test, whose implementations can be faster and simpler if we can have large on-chip “tables” stored as read-only memories (ROMs). We show that conventional de facto standard 6T and 8T static random access memory (SRAM) bit cells can embed ROM data without area overhead or performance degradation on the bit cells. Just by adding an extra wordline (WL) and connecting the WL to selected access transistor of the bit cell (based on whether a 0 or 1 is to be stored as ROM data in that location), the bit cell can work both in the SRAM mode and in the ROM mode. In the proposed ROM-embedded SRAM, during SRAM operations, ROM data is not available. To retrieve the ROM data, special write steps associated with proper via connections load ROM data into the SRAM array. The ROM data is read by conventional load instruction with unique virtual address space assigned to the data. This allows the ROM-embedded cache (R-cache) to bypass tag arrays and translation look-aside buffers, leading to fast ROM operations. We show example applications to illustrate how the R-cache can lead to low-cost logic testing and faster evaluation of mathematical functions.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
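A single LSTM step fits in a few lines of NumPy; the constant error carousel is the additive cell-state update, gated multiplicatively. The sketch below uses the now-standard formulation with a forget gate (a later extension of the original 1997 design); shapes and weights are arbitrary.

```python
# Minimal NumPy sketch of one LSTM step: the additive cell-state update is the
# constant error carousel, gated multiplicatively by input/forget/output gates.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """W: (4*hidden, input+hidden) stacked gate weights; b: (4*hidden,)."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)                        # gate pre-activations
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # additive carousel
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

hidden, inp = 8, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * hidden, inp + hidden))
b = np.zeros(4 * hidden)
h = c = np.zeros(hidden)
for _ in range(1000):                 # state carried across a 1000-step lag
    h, c = lstm_step(rng.normal(size=inp), h, c, W, b)
print(h[:3])
```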
ADRES: An Architecture with Tightly Coupled VLIW Processor and Coarse-Grained Reconfigurable Matrix The coarse-grained reconfigurable architectures have advantages over the traditional FPGAs in terms of delay, area and configuration time. To execute entire applications, most of them combine an instruction set processor (ISP) and a reconfigurable matrix. However, not much attention is paid to the integration of these two parts, which results in high communication overhead and programming difficulty. To address this problem, we propose a novel architecture with tightly coupled very long instruction word (VLIW) processor and coarse-grained reconfigurable matrix. The advantages include simplified programming model, shared resource costs, and reduced communication overhead. To exploit this architecture, our previously developed compiler framework is adapted to the new architecture. The results show that the new architecture has good performance and is very compiler-friendly.
Cache attacks and countermeasures: the case of AES We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
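Why data-dependent table lookups leak can be shown with a toy model (purely illustrative, not an attack on a real cipher): in a first-round T-table lookup the touched cache line is determined by plaintext XOR key byte, so an attacker who observes touched lines narrows the key byte to one line's worth of candidates, i.e., its upper bits leak at cache-line granularity.

```python
# Purely illustrative toy (not a real cipher or probing primitive): the cache
# line touched by a first-round T-table lookup is fixed by pt XOR key, so
# observed lines intersect the key-byte candidate set down to one line.
import random

LINE = 16                                    # table entries per cache line

def touched_line(pt_byte, key_byte):
    return (pt_byte ^ key_byte) // LINE      # what cache probing reveals

secret = 0x7A
candidates = set(range(256))
for _ in range(20):
    pt = random.randrange(256)
    line = touched_line(pt, secret)          # attacker's observation
    candidates &= {k for k in range(256) if (pt ^ k) // LINE == line}
# The upper bits leak; the low 4 bits stay hidden at line granularity.
print(f"{len(candidates)} candidates remain (one cache line's worth)")
```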
A 2.4-GHz RF Sampling Receiver Front-End in 0.18-μm CMOS This paper presents an integrable RF sampling receiver front-end architecture, based on a switched-capacitor (SC) RF sampling downconversion (RFSD) filter, for WLAN applications in a 2.4-GHz band. The RFSD filter test chip is fabricated in a 0.18-mu m CMOS technology and the measurement results show a successful realization of RF sampling, quadrature downconversion, tunable anti-alias filtering, downconversion to baseband, and decimation of the sampling rate. By changing the input sampling rate, the RFSD filter can be tuned to different RF channels. A maximum input sampling rate of 1072 MS/s has been achieved. A single-phase clock is used for the quadrature downconversion and the bandpass operation is realized by a 23-tap FIR filter. The RFSD filter has an IIP3 of +5.5 dBm, a gain of -1 dB, and more than 17 dB rejection of alias bands. The measured image rejection is 59 dB and the sampling clock jitter is 0.64 ps. The test chip consumes 47 mW in the analog part and 40 mW in the digital part. It occupies an area of 1 mm(2).
A looped-functional approach for robust stability analysis of linear impulsive systems A new functional-based approach is developed for the stability analysis of linear impulsive systems. The new method, which introduces looped functionals, considers non-monotonic Lyapunov functions and leads to LMI conditions devoid of exponential terms. This allows one to easily formulate dwell-time results, for both certain and uncertain systems. It is also shown that this approach may be applied to a wider class of impulsive systems than existing methods. Some examples, notably on sampled-data systems, illustrate the efficiency of the approach.
Closed-Loop Control Of Dead Time Systems Via Sequential Sub-Predictors This article presents a method to control and stabilise systems with pure input lag. The approach is based on a new state predictor which estimates the future of states and guarantees that the prediction error converges asymptotically to zero. The state feedback controller is then designed based on this predictor. Furthermore, a sequential structure of sub-predictors is presented for unstable systems with a long time-delay and accordingly the controller is designed for asymptotic stability. The core idea is to design a series of coupled predictors, each of which is responsible for the prediction of one small portion of the delay, such that the predictors collectively predict the states for a long time-delay. Moreover, sequential sub-predictor method is used for robust control of dead time systems in presence of uncertainty. Simulation examples are presented to verify the proposed method.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.070612
0.066667
0.066667
0.033333
0.016387
0.004833
0.002074
0.000828
0.000115
0
0
0
0
0
A CMOS MedRadio Transceiver With Supply-Modulated Power Saving Technique for an Implantable Brain–Machine Interface System A MedRadio 413–419-MHz inductorless transceiver (TRX) for an implantable brain–machine interface (BMI) in a 180 nm-CMOS process is presented. Occupying 5.29 mm² of die area (including pad ring), this on–off keying (OOK) TRX employs a non-coherent direct-detection receiver (RX), which exhibits a measured in-band noise figure (NF) of 4.9 dB and S11 of −13.5 dB. An event-driven supply modulation (EDSM) technique is introduced to dramatically lower the RX power consumption. Incorporating an adaptive feedback loop, this RX consumes 42-/92-μW power from 1.8-V supply at 1/10-kbps data rates, achieving −79/−74-dBm sensitivities for 0.1% bit error rate (BER). The TX employs a current starved ring oscillator with an automatic frequency calibration loop, covering 9% supply voltage variation and 15 °C–78 °C temperature range that guarantees operation within the emission mask. The direct-modulation TX achieves 14% efficiency for a random OOK data sequence at −4-dBm output power. Wireless testing over a 350-cm distance accounting for bio-signal data transfer, multi-user coexistence, and in vitro phantom measurement results is demonstrated.
IIP3 Enhancement of Subthreshold Active Mixers This brief presents a modified subthreshold Gilbert mixer that includes an inductor between the drain of the radio frequency (RF) transistor and the sources of the LO transistors, an inductive source degeneration for the RF transistor, and a cross-coupling capacitor between the source of the RF transistor and the drain of the RF transistor in the other mixer branch to improve third-order distortion characteristics. This linearization technique enables a third-order intermodulation intercept point (IIP3) improvement of at least 10 dB compared to other subthreshold mixers. A 2.4-GHz mixer was designed and simulated using 0.11-μm CMOS technology. In the typical corner case of the postlayout simulations, the linearized mixer achieves 6.7-dBm IIP3, 8.6-dB voltage gain, and 19.2-dB single-sideband noise figure with a low power consumption of 0.423 mW.
A Low-IF/Zero-IF Reconfigurable Analog Baseband IC With an I/Q Imbalance Cancellation Scheme A low-IF/zero-IF reconfigurable analog baseband IC embodying an automatic I/Q imbalance cancellation scheme is reported. The chip, which comprises a down-conversion mixer, an analog baseband filter, and a programmable gain amplifier, achieves a high image rejection of 55 dB without any calibration. It operates over a wide radio frequency range of 0.4-2.4 GHz, and has a cut-off frequency range of 0.3-30 MHz in zero-intermediate frequency (IF) mode and an IF range of 0.2-6 MHz in low-IF mode. The circuit in the receiver chain draws only 4.5-6.2 mA, and the clock generator including LO buffers draws 1.8-6.3 mA from a 1.2-V supply. The chip, implemented in 90-nm CMOS technology, occupies an area of 1.1 mm².
A 1.9mW 750kb/s 2.4GHz F-OOK transmitter with symmetric FM template and high-point modulation PLL. This paper describes a frequency-domain on-off keying (F-OOK) modulation method which utilizes power detection of both a carrier and a sideband by modulating the carrier frequency with a pre-selected modulation template. Compared to on-off keying and binary frequency-shift keying modulations, more efficient bandwidth control is achieved for the same data rate with a symmetric FM template. Since th...
Analysis and Demonstration of an IIP3 Improvement Technique for Low-Power RF Low-Noise Amplifiers. This paper describes a linearization method to enhance the third-order distortion performance of a subthreshold common-source cascode low-noise amplifier (LNA) without extra power consumption by using passive components. An inductor between the gate of the cascode transistor and the power supply in combination with a digitally programmable capacitor between the gate and the drain of the cascode tr...
Novel Signal Processing Technique for Capture and Isolation of Blink-Related Oscillations Using a Low-Density Electrode Array for Bedside Evaluation of Consciousness. Objective: Blink-related oscillations derived from electroencephalography (EEG) have recently emerged as an important measure of awareness. Combined with portable EEG hardware with low-density electrode arrays, this neural marker may crucially augment the existing bedside assessments of consciousness in unresponsive patients. Nonetheless, the close relationship between signal characteristics of th...
Evolver: A Deep Learning Processor With On-Device Quantization–Voltage–Frequency Tuning When deploying deep neural networks (DNNs) onto deep learning processors, we usually exploit mixed-precision quantization and voltage-frequency scaling to make tradeoffs among accuracy, latency, and energy. Conventional methods usually determine the quantization-voltage-frequency (QVF) policy before DNNs are deployed onto local devices. However, they are difficult to make optimal customizations for local user scenarios. In this article, we solve the problem by enabling on-device QVF tuning with a new deep learning processor architecture Evolver. Evolver has a QVF tuning mode to deploy DNNs with local customizations before normal execution. In this mode, Evolver uses reinforcement learning to search the optimal QVF policy based on direct hardware feedbacks from the chip itself. After that, Evolver runs the newly quantized DNN inference under the searched voltage and frequency. To improve the performance and energy efficiency of both training and inference, we introduce bidirectional speculation and runtime reconfiguration techniques into the architecture. To the best of our knowledge, Evolver is the first deep learning processor that utilizes on-device QVF tuning to achieve both customized and optimal DNN deployment.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
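The n ≥ 3m + 1 bound is easy to exercise with a toy simulation in the style of the recursive oral-messages algorithm from this line of work. A minimal sketch, assuming an illustrative lying rule for faulty nodes (a two-faced commander sends alternating values per recipient) and arbitrary tie-breaking:

```python
# Minimal simulation of a recursive oral-messages protocol OM(m),
# illustrating the n >= 3m + 1 bound. The lying rule and tie-breaking
# are illustrative choices, not part of the original formulation.
from collections import Counter

def om(commander, lieutenants, value, m, faulty):
    """Return {lieutenant: decided value} after OM(m)."""
    sent = {lt: (("A" if i % 2 else "R") if commander in faulty else value)
            for i, lt in enumerate(lieutenants)}
    if m == 0:
        return sent
    decided = {}
    for lt in lieutenants:
        # lt hears each other lieutenant o re-broadcast (via OM(m-1)) the
        # value o claims to have received, then takes a majority vote.
        relayed = [om(o, [x for x in lieutenants if x != o],
                      sent[o], m - 1, faulty)[lt]
                   for o in lieutenants if o != lt]
        decided[lt] = Counter([sent[lt]] + relayed).most_common(1)[0][0]
    return decided

# n = 4, m = 1: even with a faulty (two-faced) commander, the three
# loyal lieutenants reach the same decision, as the bound predicts.
print(om(0, [1, 2, 3], "A", m=1, faulty={0}))
```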
Exploring an unknown graph It is desired to explore all edges of an unknown directed, strongly connected graph. At each point one has a map of all nodes and edges visited, one can recognize these nodes and edges upon seeing them again, and it is known how many unexplored edges emanate from each node visited. The goal is to minimize the ratio of the total number of edges traversed to the optimum number of traversals had the graph been known. For Eulerian graphs this ratio cannot be better than 2, and 2 is achievable by a simple algorithm. In contrast, the ratio is unbounded when the deficiency of the graph (the number of edges that have to be added to make it Eulerian) is unbounded. The main result is an algorithm that achieves a bounded ratio when the deficiency is bounded; unfortunately the ratio is exponential in the deficiency. It is also shown that, when partial information about the graph is available, minimizing the worst-case ratio is PSPACE-complete.
A framework for security on NoC technologies Multiple heterogeneous processor cores, memory cores and application specific IP cores integrated in a communication network, also known as networks on chips (NoCs), will handle a large number of applications including security. Although NoCs offer more resistance to bus probing attacks, power/EM attacks and network snooping attacks are relevant. For the first time, a framework for security on NoC at both the network level (or transport layer) and at the core level (or application layer) is proposed. At the network level, each IP core has a security wrapper and a key-keeper core is included in the NoC, protecting encrypted private and public keys. Using this framework, unencrypted keys are prevented from leaving the cores and NoC. This is crucial to prevent untrusted software on or off the NoC from gaining access to keys. At the core level (application layer) the security framework is illustrated with software modification for resistance against power attacks with extremely low overheads in energy. With the emergence of secure IP cores in the market and nanometer technologies, a security framework for designing NoCs is crucial for supporting future wireless Internet enabled devices.
Extremal cover times for random walks on trees
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
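The mechanism behind the PUMP can be sketched as a cache from (opcode, input tags) to a result tag, with a software handler invoked on a miss. The class layout and `resolve` interface below are illustrative, shown with a taint-propagation rule in the spirit of the paper's taint-tracking policy:

```python
# Sketch of a rule cache with a software miss handler. The policy shown
# propagates the union of input taints; key layout is illustrative.
class Pump:
    def __init__(self, policy):
        self.policy, self.cache = policy, {}

    def resolve(self, opcode, tags):
        key = (opcode, tags)
        if key not in self.cache:                       # rule-cache miss
            self.cache[key] = self.policy(opcode, tags)  # software handler
        return self.cache[key]

def taint_policy(opcode, tags):
    return frozenset().union(*tags)     # result carries all input taints

pump = Pump(taint_policy)
t = pump.resolve("add", (frozenset({"net"}), frozenset({"file"})))
print(sorted(t))   # ['file', 'net']: both input taints propagate
```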
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7-dB SNDR across a 10-kHz bandwidth with a full scale of 200 mVpp, and a ±1.2-V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 µs while small neural signals can be continuously monitored.
score_0–score_13: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.02, 0, 0, 0, 0, 0, 0, 0
Communications and Control for Wireless Drone-Based Antenna Array In this paper, the effective use of multiple quadrotor drones as an aerial antenna array that provides wireless service to ground users is investigated. In particular, under the goal of minimizing the airborne service time needed for communicating with ground users, a novel framework for deploying and operating a drone-based antenna array system whose elements are single-antenna drones is proposed...
Kademlia: A Peer-to-Peer Information System Based on the XOR Metric We describe a peer-to-peer distributed hash table with provable consistency and performance in a fault-prone environment. Our system routes queries and locates nodes using a novel XOR-based metric topology that simplifies the algorithm and facilitates our proof. The topology has the property that every message exchanged conveys or reinforces useful contact information. The system exploits this information to send parallel, asynchronous query messages that tolerate node failures without imposing timeout delays on users.
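The XOR metric itself is a one-liner, and the k-bucket index falls out of the highest differing bit. A minimal sketch, with small integer IDs standing in for Kademlia's 160-bit identifiers:

```python
# Minimal sketch of the XOR metric and the k-bucket index it induces.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

def bucket_index(self_id: int, other_id: int) -> int:
    """k-bucket index for a contact: position of the highest differing bit."""
    d = xor_distance(self_id, other_id)
    assert d > 0, "a node does not bucket itself"
    return d.bit_length() - 1

# Closeness under XOR is just a sort key:
nodes, target = [0b1010, 0b0111, 0b1100, 0b0001], 0b1000
print(sorted(nodes, key=lambda n: xor_distance(n, target)))
# -> [10, 12, 1, 7]: 0b1010 is closest (distance 2), 0b0111 farthest (15)
print(bucket_index(0b0000, 0b1000))   # -> 3: IDs differ first in bit 3
```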
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such a vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
OpenIoT: An open service framework for the Internet of Things The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market, and establishing a global IoT ecosystem with the worldwide use of IoT devices and software. We expect that the open IoT service framework we proposed will play an important role in the widespread adoption of the Internet of Things in our everyday life, enhancing our quality of life with a large number of innovative applications and services and offering endless opportunities to all of the stakeholders in the world of information and communication technologies.
Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System. Traditional standalone embedded systems are limited in their functionality, flexibility, and scalability. The fog computing platform, characterized by pushing cloud services to the network edge, is a promising solution to support and strengthen traditional embedded systems. Resource management is always a critical issue for system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images reside on the storage server while computations can be conducted on either the embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time for promoting the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling, 2) how to place task images on storage servers, i.e., resource management, and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computation complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation based studies.
Edge-to-Edge Resource Discovery using Metadata Replication Edge computing has been recently introduced as an intermediary between Internet of Things (IoT) deployments and the cloud, providing data or control facilities to participating IoT devices. This includes actively supporting IoT resource discovery, something particularly pertinent when building large-scale, distributed and heterogeneous IoT systems. Moreover, edge devices supporting resource discovery are required to meet the stringent requirements prevalent in IoT systems including high availability, low-latency, and privacy. To this end, we present a resource discovery platform for IoT resources situated at the edge of the network. Our approach aims at providing a seamless discovery process that is able to (i) extend the covered area by deploying additional edge nodes and (ii) assist in the development of new IoT applications that target already available resources. Within our proposed platform, devices located in a certain proximity connect and form an edge-to-edge network that we call an edge neighborhood - our edge-to-edge metadata replication platform enables participating devices to discover available resources. Our solution is characterized by absence of centralization, as edge nodes exchange metadata about available resources within their scope in a peer-to-peer manner.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
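The idea carries over directly: the program is a flat list of routine references plus inline operands, and dispatch is just advancing an instruction pointer through it, with no decode loop. A Python sketch of an indirect-threaded interpreter, with a made-up four-instruction set:

```python
# Indirect-threaded interpreter sketch: each "instruction" is a routine
# reference; each routine returns the next instruction-pointer value.
def run(program):
    stack, ip = [], 0
    while ip is not None:
        ip = program[ip](program, ip, stack)

def lit(program, ip, stack):      # push the inline operand
    stack.append(program[ip + 1]); return ip + 2

def add(program, ip, stack):
    b, a = stack.pop(), stack.pop(); stack.append(a + b); return ip + 1

def emit(program, ip, stack):
    print(stack.pop()); return ip + 1

def halt(program, ip, stack):
    return None

# Threaded code for "print(2 + 3)": a flat sequence of routine addresses.
run([lit, 2, lit, 3, add, emit, halt])
```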
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with the so-called converter-reshuffling (CoRe) technique to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
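The entropy-based security metric can be sketched directly: quantize an observable power trace into levels and compute its Shannon entropy, where a higher value means the profile leaks less workload information. The trace values, level count, and quantization rule below are made up for illustration:

```python
# Sketch of an entropy metric over an observable power profile.
from collections import Counter
from math import log2

def trace_entropy(trace, levels=8):
    lo, hi = min(trace), max(trace)
    q = [min(int((v - lo) / (hi - lo + 1e-9) * levels), levels - 1)
         for v in trace]
    n = len(q)
    return -sum(c / n * log2(c / n) for c in Counter(q).values())

flat = [1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0]        # workload-correlated
scrambled = [1.3, 1.9, 1.1, 1.6, 1.2, 1.8, 1.4, 1.7]   # spikes inserted
print(trace_entropy(flat), trace_entropy(scrambled))    # lower vs. higher
```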
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Side-Channel Leaks in Web Applications: A Reality Today, a Challenge Tomorrow With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG-stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS-stage is carefully optimized. We show that a CS-stage with deep submicron transistors can have high IIP2, because the vGS·vDS cross-term in a two-dimensional Taylor approximation of the IDS(VGS, VDS) characteristic can cancel the traditionally dominant square-law term in the IDS(VGS) relation at practical gain values. Using standard 65-nm transistors at 1.2-V supply voltage, we realize a balun-LNA with 15-dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz and 3.5 GHz with gain and phase errors below 0.3 dB and ±2°. The total power consumption is 21 mW, while the active area is only 0.01 mm².
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High-voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors are at the mT level [1-3], in contrast to the µT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
score_0–score_13: 1.24, 0.24, 0.24, 0.24, 0.12, 0.12, 0, 0, 0, 0, 0, 0, 0, 0
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
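The heavy-edge heuristic from this abstract is easy to sketch: visit vertices in random order and match each unmatched vertex to the unmatched neighbor across the heaviest incident edge. A minimal sketch, assuming an illustrative dict-of-dicts graph format with edge weights:

```python
# Sketch of heavy-edge matching for the coarsening phase.
import random

def heavy_edge_matching(adj):
    """adj[u][v] = weight of edge (u, v). Returns matched vertex pairs."""
    matched, pairs = set(), []
    order = list(adj)
    random.shuffle(order)                    # random visit order
    for u in order:
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items()
                      if v not in matched and v != u]
        if candidates:
            _, v = max(candidates)           # heaviest incident edge wins
            matched.update({u, v})
            pairs.append((u, v))             # (u, v) collapse in the coarse graph
    return pairs

adj = {1: {2: 5, 3: 1}, 2: {1: 5, 3: 2}, 3: {1: 1, 2: 2}}
print(heavy_edge_matching(adj))  # e.g. [(1, 2)] or [(3, 2)], order-dependent
```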
Architecture Aware Partitioning Algorithms Existing partitioning algorithms provide limited support for load balancing simulations that are performed on heterogeneous parallel computing platforms. On such architectures, effective load balancing can only be achieved if the graph is distributed so that it properly takes into account the available resources (CPU speed, network bandwidth). With heterogeneous technologies becoming more popular, the need for suitable graph partitioning algorithms is critical. We developed such algorithms that can address the partitioning requirements of scientific computations, and can correctly model the architectural characteristics of emerging hardware platforms.
AMD Fusion APU: Llano The Llano variant of the AMD Fusion accelerated processor unit (APU) deploys AMD Turbo CORE technology to maximize processor performance within the system's thermal design limits. Low-power design and performance/watt ratio optimization were key design approaches, and power gating is implemented pervasively across the APU.
Decoupling Data Supply from Computation for Latency-Tolerant Communication in Heterogeneous Architectures. In today's computers, heterogeneous processing is used to meet performance targets at manageable power. In adopting increased compute specialization, however, the relative amount of time spent on communication increases. System and software optimizations for communication often come at the costs of increased complexity and reduced portability. The Decoupled Supply-Compute (DeSC) approach offers a way to attack communication latency bottlenecks automatically, while maintaining good portability and low complexity. Our work expands prior Decoupled Access Execute techniques with hardware/software specialization. For a range of workloads, DeSC offers roughly 2× speedup, and additional specialized compression optimizations reduce traffic between decoupled units by 40%.
Tiny but mighty: designing and realizing scalable latency tolerance for manycore SoCs Modern computing systems employ significant heterogeneity and specialization to meet performance targets at manageable power. However, memory latency bottlenecks remain problematic, particularly for sparse neural network and graph analytic applications where indirect memory accesses (IMAs) challenge the memory hierarchy. Decades of prior art have proposed hardware and software mechanisms to mitigate IMA latency, but they fail to analyze real-chip considerations, especially when used in SoCs and manycores. In this paper, we revisit many of these techniques while taking into account manycore integration and verification. We present the first system implementation of latency tolerance hardware that provides significant speedups without requiring any memory hierarchy or processor tile modifications. This is achieved through a Memory Access Parallel-Load Engine (MAPLE), integrated through the Network-on-Chip (NoC) in a scalable manner. Our hardware-software co-design allows programs to perform long-latency memory accesses asynchronously from the core, avoiding pipeline stalls, and enabling greater memory parallelism (MLP). In April 2021 we taped out a manycore chip that includes tens of MAPLE instances for efficient data supply. MAPLE demonstrates a full RTL implementation of out-of-core latency-mitigation hardware, with virtual memory support and automated compilation targeting it. This paper evaluates MAPLE integrated with a dual-core FPGA prototype running applications with full SMP Linux, and demonstrates geomean speedups of 2.35× and 2.27× over software-based prefetching and decoupling, respectively. Compared to state-of-the-art hardware, it provides geomean speedups of 1.82× and 1.72× over prefetching and decoupling techniques.
OpenCGRA: An Open-Source Unified Framework for Modeling, Testing, and Evaluating CGRAs Coarse-grained reconfigurable arrays (CGRAs), loosely defined as arrays of functional units (e.g., adder, subtractor, multiplier, divider, or larger multi-operation units, but smaller than a general-purpose core) interconnected through a Network-on-Chip, provide higher flexibility than domain-specific ASIC accelerators while offering increased hardware efficiency with respect to fine-grained reconfigurable devices, such as Field Programmable Gate Arrays (FPGAs). The fast evolving fields of machine learning and edge computing, which are seeing a continuous flow of novel algorithms and larger models, make CGRAs ideal architectures to allow domain specialization without losing too much generality. Designing and generating a CGRA, however, still requires to define the type and number of the specific functional units, implement their interconnect and the network topology, and perform the simulation and validation, given a variety of workloads of interest. In this paper, we propose OpenCGRA, the first open-source integrated framework that is able to support the full top-to-bottom design flow for specializing and implementing CGRAs: modeling at different abstraction levels (functional level, cycle level, register-transfer level) with compiler support, verification at different granularities (unit testing, integration testing, property-based testing), simulation, generation of synthesizable Verilog, and characterization (area, power, and timing). By using OpenCGRA, it only takes a few hours to build a specialized power- and area-efficient CGRA throughout the entire design flow given a set of applications of interest. OpenCGRA is available online at https://github.com/pnnl/OpenCGRA.
Livia: Data-Centric Computing Throughout the Memory Hierarchy In order to scale, future systems will need to dramatically reduce data movement. Data movement is expensive in current designs because (i) traditional memory hierarchies force computation to happen unnecessarily far away from data and (ii) processing-in-memory approaches fail to exploit locality. We propose Memory Services, a flexible programming model that enables data-centric computing throughout the memory hierarchy. In Memory Services, applications express functionality as graphs of simple tasks, each task indicating the data it operates on. We design and evaluate Livia, a new system architecture for Memory Services that dynamically schedules tasks and data at the location in the memory hierarchy that minimizes overall data movement. Livia adds less than 3% area overhead to a tiled multicore and accelerates challenging irregular workloads by 1.3× to 2.4× while reducing dynamic energy by 1.2× to 4.7×.
Work-Efficient Parallel GPU Methods for Single-Source Shortest Paths Finding the shortest paths from a single source to all other vertices is a fundamental method used in a variety of higher-level graph algorithms. We present three parallel friendly and work-efficient methods to solve this Single-Source Shortest Paths (SSSP) problem: Work front Sweep, Near-Far and Bucketing. These methods choose different approaches to balance the trade off between saving work and organizational overhead. In practice, all of these methods do much less work than traditional Bellman-Ford methods, while adding only a modest amount of extra work over serial methods. These methods are designed to have a sufficient parallel workload to fill modern massively-parallel machines, and select reorganizational schemes that map well to these architectures. We show that in general our Near-Far method has the highest performance on modern GPUs, outperforming other parallel methods. We also explore a variety of parallel load-balanced graph traversal strategies and apply them towards our SSSP solver. Our work-saving methods always outperform a traditional GPU Bellman-Ford implementation, achieving rates up to 14x higher on low-degree graphs and 340x higher on scale free graphs. We also see significant speedups (20-60x) when compared against a serial implementation on graphs with adequately high degree.
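The Near-Far split can be sketched sequentially: relax vertices whose tentative distance falls below the current threshold, park the rest in a far pile, and advance the threshold by a tuning parameter delta once the near set drains. A minimal sketch; the graph and the delta value are illustrative, and the parallel GPU organization is deliberately left out:

```python
# Sequential sketch of the Near-Far worklist split for SSSP.
import math

def sssp_near_far(adj, src, delta):
    """adj[u] = list of (v, w). Returns shortest distances from src."""
    dist = {u: math.inf for u in adj}
    dist[src] = 0.0
    threshold, near, far = delta, [src], []
    while near or far:
        while near:                       # relax the near pile to a fixpoint
            u = near.pop()
            for v, w in adj[u]:
                nd = dist[u] + w
                if nd < dist[v]:
                    dist[v] = nd
                    (near if nd < threshold else far).append(v)
        threshold += delta                # then promote far vertices
        near = [v for v in far if dist[v] < threshold]
        far = [v for v in far if dist[v] >= threshold]
    return dist

adj = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.0)], 2: []}
print(sssp_near_far(adj, 0, delta=2.0))   # {0: 0.0, 1: 1.0, 2: 2.0}
```

Delta is exactly the work/overhead trade-off knob the abstract describes: a small value saves relaxations but reorganizes the worklist more often.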
HotSpot: A Compact Thermal Modeling Methodology for Early-Stage VLSI Design This paper presents HotSpot, a modeling methodology for developing compact thermal models based on the popular stacked-layer packaging scheme in modern very large-scale integration systems. In addition to modeling silicon and packaging layers, HotSpot includes a high-level on-chip interconnect self-heating power and thermal model such that the thermal impacts on interconnects can also be considered...
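The compact-model idea reduces to integrating a small thermal RC network: C · dT/dt = P − G · T, with temperatures taken relative to ambient. A toy two-node (silicon block plus heat sink) forward-Euler sketch, with made-up R/C/power values rather than HotSpot's actual layer stack:

```python
# Toy two-node compact thermal model: block -> heat sink -> ambient.
import numpy as np

R01, Ramb = 1.0, 2.0              # block->sink, sink->ambient (K/W)
C = np.array([0.05, 5.0])         # thermal capacitances (J/K)
P = np.array([10.0, 0.0])         # dissipated power (W)
G = np.array([[ 1/R01,         -1/R01],
              [-1/R01, 1/R01 + 1/Ramb]])

T, dt = np.zeros(2), 1e-3         # start at ambient; 1 ms Euler step
for _ in range(int(200 / dt)):    # 200 s of warm-up
    T += dt * (P - G @ T) / C
print(T)   # -> approx [30. 20.] K above ambient: 10 W * 2 K/W at the
           #    sink, plus 10 W * 1 K/W across the block-sink resistance
```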
Secure Page Fusion with VUsion: https://www.vusec.net/projects/VUsion. To reduce memory pressure, modern operating systems and hypervisors such as Linux/KVM deploy page-level memory fusion to merge physical memory pages with the same content (i.e., page fusion). A write to a fused memory page triggers a copy-on-write event that unmerges the page to preserve correct semantics. While page fusion is crucial in saving memory in production, recent work shows significant security weaknesses in its current implementations. Attackers can abuse timing side channels on the unmerge operation to leak sensitive data such as randomized pointers. Additionally, they can exploit the predictability of the merge operation to massage physical memory for reliable Rowhammer attacks. In this paper, we present VUsion, a secure page fusion system. VUsion can stop all the existing and even new classes of attack, where attackers leak information by side-channeling the merge operation or massage physical memory via predictable memory reuse patterns. To mitigate information disclosure attacks, we ensure attackers can no longer distinguish between fused and non-fused pages. To mitigate memory massaging attacks, we ensure fused pages are always allocated from a high-entropy pool. Despite its secure design, our comprehensive evaluation shows that VUsion retains most of the memory saving benefits of traditional memory fusion with negligible performance overhead while maintaining compatibility with other advanced memory management features.
Distributed computation in dynamic networks In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T-interval connectivity (for T ≥ 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any computable function of their initial inputs in O(n²) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n²/T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.
Store-and-Forward Buffer Requirements in a Packet Switching Network Previous analytic models for packet switching networks have always assumed infinite storage capacity in store-and-forward (S/F) nodes. In this paper, we relax this assumption and present a model for a packet switching network in which each node has a finite pool of S/F buffers. A packet arriving at a node in which all S/F buffers are temporarily filled is discarded. The channel transmission control mechanisms of positive acknowledgment and time-out of packets are included in this model. Individual S/F nodes are analyzed separately as queueing networks with different classes of packets. The single node results are interfaced by imposing a continuity of flow constraint. A heuristic algorithm for determining a balanced assignment of nodal S/F buffer capacities is proposed. Numerical results for the performance of a 19 node network are illustrated.
Synchronization of stochastic dynamical networks under impulsive control with time delays. In this paper, the stochastic synchronization problem is studied for a class of delayed dynamical networks under delayed impulsive control. Different from the existing results on the synchronization of dynamical networks under impulsive control, impulsive input delays are considered in our model. By assuming that the impulsive intervals belong to a certain interval and using the mathematical induction method, several conditions are derived to guarantee that complex networks are exponentially synchronized in mean square. The derived conditions reveal that the frequency of impulsive occurrence, impulsive input delays, and stochastic perturbations can heavily affect the synchronization performance. A control algorithm is then presented for synchronizing stochastic dynamical networks with delayed synchronizing impulses. Finally, two examples are given to demonstrate the effectiveness of the proposed approach.
A Sub-µW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-µW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180-nm CMOS process draws 815 nW from a 1-V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz, while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 µVrms. The proposed front-end achieves sub-µW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
score_0–score_13: 1.043571, 0.04, 0.04, 0.04, 0.04, 0.02, 0.006667, 0.001336, 0.000167, 0, 0, 0, 0, 0
The Interdomain Connectivity of PlanetLab Nodes In this paper we investigate the interdomain connectivity of PlanetLab nodes. We note that about 85 percent of the hosts are located within what we call the Global Research and Educational Network (GREN) - an interconnected network of high speed research networks such as Internet2 in the USA and Dante in Europe. Since traffic with source and destination on the GREN is very likely to be transited solely by the GREN, this means that over 70 percent of the end-to-end measurements between PlanetLab node pairs represent measurements of GREN characteristics. We suggest that it may be possible to systematically choose the placement of new nodes so that as the PlanetLab platform grows it becomes a closer and closer approximation to the Global Internet.
Autonomic Live Adaptation of Virtual Computational Environments in a Multi-Domain Infrastructure A shared distributed infrastructure is formed by federating computation resources from multiple domains. Such shared infrastructures are increasing in popularity and are providing massive amounts of aggregated computation resources to large numbers of users. Meanwhile, virtualization technologies, at machine and network levels, are maturing and enabling mutually isolated virtual computation environments for executing arbitrary parallel/distributed applications on top of such a shared physical infrastructure. In this paper, we go one step further by supporting autonomic adaptation of virtual computation environments as active, integrated entities. More specifically, driven by both dynamic availability of infrastructure resources and dynamic application resource demand, a virtual computation environment is able to automatically relocate itself across the infrastructure and scale its share of infrastructural resources. Such autonomic adaptation is transparent to both users of virtual environments and administrators of infrastructures, maintaining the look and feel of a stable, dedicated environment for the user. As our proof-of-concept, we present the design, implementation and evaluation of a system called VIOLIN, which is composed of a virtual network of virtual machines capable of live migration across a multi-domain physical infrastructure.
The design and implementation of OGSA-DQP: A service-based distributed query processor Service-based approaches are rising to prominence because of their potential to meet the requirements for distributed application development in e-business and e-science. The emergence of a service-oriented view of hardware and software resources raises the question as to how database management systems and technologies can best be deployed or adapted for use in such an environment. This paper explores one aspect of service-based computing and data management, viz., how to integrate query processing technology with a service-based architecture suitable for a Grid environment. The paper addresses this by describing in detail the design and implementation of a service-based distributed query processor. The query processor is service-based in two orthogonal senses: firstly, it supports querying over data storage and analysis resources that are made available as services, and, secondly, its internal architecture factors out as services the functionalities related to the construction and execution of distributed query plans. The resulting system both provides a declarative approach to service orchestration, and demonstrates how query processing can benefit from a service-based architecture. As well as describing and motivating the architecture used, the paper also describes usage scenarios, and, using a bioinformatics application, presents performance results that benchmark the system and illustrate the benefits provided by the service-based architecture.
Mobile code enabled Web services A primary benefit of Web services is that they provide a uniform implementation-independent mechanism for accessing distributed services. Building and deploying such services do not benefit from the same advantages, however. Different Web services containers are implemented in different programming languages, with different constraints and requirements placed on the programmer. Moreover, client side programmers must use the Web service interface specified by the service developer. Therefore, the kinds of applications and uses for a Web service are unnecessarily restrictive, constrained by the granularity of access defined by the interface and by the characteristics of the service functions. This paper describes an approach that addresses both of these drawbacks by enabling Web service containers with the ability to accept new mobile code on the fly, and to run it within the containers, providing direct local access to the containers' other services. The code can be specified in a small simple language (a subset of C), and translated and passed to the container in a common XML-based intermediate language called X#. This approach effectively removes the dependence on any single implementation environment. Our prototype implementation for two different containers demonstrates the feasibility of the approach, which represents a first step toward write-once deploy-anywhere Web services.
Adding dynamism to OGSA-DQP: incorporating the DynaSOAr framework in distributed query processing OGSA-DQP is a Distributed Query Processing system for the Grid. It uses the OGSA-DAI framework for querying individual databases and adds on top of it an infrastructure to perform distributed querying on these databases. OGSA-DQP also enables the invocation of analysis services, such as Blast, within the query itself, thereby creating a form of declarative workflow system. DynaSOAr is an infrastructure for dynamically deploying web services over a Grid or a set of networked resources. The DynaSOAr view of grid computing revolves around the concept of services, rather than jobs where services are deployed on demand to meet the changing performance requirements. This paper describes the merging of these two frameworks to enable a certain amount of dynamic deployment to take place within distributed query processing.
Dynamically Deploying Web Services on a Grid using Dynasoar Dynasoar is an infrastructure for dynamically deploying Web services over a grid or the Internet. It enables an approach to grid computing in which distributed applications are built around services instead of jobs. Dynasoar automatically deploys a service on an available host if no existing deployments exist, or if performance requirements cannot be met by existing deployments. This is analogous to remote job scheduling, but offers the opportunity for improved performance as the cost of moving and deploying the service can be shared across the processing of many messages. A key feature of the architecture is that it makes a clear separation between Web service providers, who offer services to consumers, and host providers, who offer computational resources on which services can be deployed, and messages sent to them processed. Separating these two components and defining their interactions, opens up the opportunity for interesting new organisational/business models
Efficient Broadcast in Structured P2P Networks In this position paper, we present an efficient algorithm for performing a broadcast operation with minimal cost in structured DHT-based P2P networks. In a system of N nodes, a broadcast message originating at an arbitrary node reaches all other nodes after exactly N - 1 messages. We emphasize the perception of a class of DHT systems as a form of distributed k-ary search and we take advantage of that perception in constructing a spanning tree that is utilized for efficient broadcasting. We consider broadcasting as a basic service that adds to existing DHTs the ability to search using arbitrary queries as well as disseminate/collect global information.
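The spanning-tree construction can be sketched on a toy ring: each node forwards to its fingers, bounding finger i by finger i+1, so every node receives the message exactly once and exactly N − 1 messages are sent. The full-knowledge finger computation below is a simplification of a real DHT routing table:

```python
# Toy Chord-style broadcast over a 16-slot identifier ring.
RING, NODES = 16, sorted([0, 3, 5, 9, 11, 14])
delivered, msgs = [], 0

def succ(i):                                   # successor of ring point i
    return min(NODES, key=lambda x: (x - i) % RING)

def broadcast(node, limit):
    global msgs
    delivered.append(node)
    span = (limit - node) % RING or RING       # interval this node must cover
    fingers = []
    for k in range(4):                         # fingers at node + 2^k, 2^k < RING
        f = succ((node + 2**k) % RING)
        if f != node and (f - node) % RING < span and f not in fingers:
            fingers.append(f)
    for i, f in enumerate(fingers):            # finger i covers up to finger i+1
        msgs += 1
        broadcast(f, fingers[i + 1] if i + 1 < len(fingers) else limit)

broadcast(NODES[0], NODES[0])                  # limit = self: cover the whole ring
print(sorted(delivered), msgs)                 # all 6 nodes, exactly N - 1 = 5 msgs
```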
Merged Two-Stage Power Converter With Soft Charging Switched-Capacitor Stage in 180 nm CMOS In this paper, we introduce a merged two-stage dc-dc power converter for low-voltage power delivery. By separating the transformation and regulation function of a dc-dc power converter into two stages, both large voltage transformation and high switching frequency can be achieved. We show how the switched-capacitor stage can operate under soft charging conditions by suitable control and integration (merging) of the two stages. This mode of operation enables improved efficiency and/or power density in the switched-capacitor stage. A 5-to-1 V, 0.8 W integrated dc-dc converter has been developed in 180 nm CMOS. The converter achieves a peak efficiency of 81%, with a regulation stage switching frequency of 10 MHz.
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
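The projected multi-agent subgradient iteration is compact: mix neighbor estimates, step along the local subgradient, project onto the constraint set. The sketch below uses static doubly stochastic weights, toy quadratic objectives, and a common box constraint; the paper's state-dependent, randomly varying links are deliberately simplified away:

```python
# Sketch of a projected multi-agent subgradient method on 3 agents.
import numpy as np

targets = np.array([1.0, 3.0, 5.0])       # local optima of f_i(x) = (x - t_i)^2
box = (0.0, 4.0)                          # common box constraint, for simplicity
W = np.array([[0.50, 0.25, 0.25],         # doubly stochastic mixing weights
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros(3)                           # one scalar estimate per agent
for k in range(1, 5001):
    grads = 2 * (x - targets)             # subgradients of the local objectives
    x = np.clip(W @ x - (1.0 / k) * grads, *box)   # mix, step, project
print(x)   # agents reach consensus near argmin sum_i (x - t_i)^2 = 3.0
```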
A 10-Gb/s CMOS clock and data recovery circuit with a half-rate binary phase/frequency detector A 10-Gb/s phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a half-rate phase/frequency detector with automatic data retiming. Fabricated in 0.18-µm CMOS technology in an area of 1.75×1.55 mm², the circuit exhibits a capture range of 1.43 GHz, an rms jitter of 0.8 ps, a peak-to-peak jitter of 9.9 ps, and a bit error rate of 10⁻⁹ with a pseudorandom bit sequence of 2²³−1. The power dissipation excluding the output buffers is 91 mW from a 1.8-V supply.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%-8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling reduced interleaving. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2105
0.2105
0.2105
0.2105
0.1405
0.055417
0.000175
0
0
0
0
0
0
0
Computing with time: microarchitectural weird machines Side-channel attacks such as Spectre rely on properties of modern CPUs that permit discovery of microarchitectural state via timing of various operations. The Weird Machine concept is an increasingly popular model for characterization of emergent execution that arises from side-effects of conventional computing constructs. In this work we introduce Microarchitectural Weird Machines (µWM): code constructions that allow performing computation through the means of side effects and conflicts between microarchitectural entities such as branch predictors and caches. The results of such computations are observed as timing variations. We demonstrate how µWMs can be used as a powerful obfuscation engine where computation operates based on events unobservable to conventional anti-obfuscation tools based on emulation, debugging, static and dynamic analysis techniques. We demonstrate that µWMs can be used to reliably perform arbitrary computation by implementing a SHA-1 hash function. We then present a practical example in which we use a µWM to obfuscate malware code such that its passive operation is invisible to an observer with full power to view the architectural state of the system until the code receives a trigger. When the trigger is received the malware decrypts and executes its payload. To show the effectiveness of obfuscation we demonstrate its use in the concealment and subsequent execution of a payload that exfiltrates a shadow password file, and a payload that creates a reverse shell.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To the best of our knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, although this parameter is highly technology-dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
Non-crypto Hardware Hash Functions for High Performance Networking ASICs Hash functions are vital in networking. Hash-based algorithms are increasingly deployed in mission-critical, high speed network devices. These devices will need small, quick, hardware hash functions to keep up with Internet growth. There are many hardware hash functions used in this situation, foremost among them CRC-32. We develop parametrized methods for evaluating hash function output quality so as to better compare similar hash functions. We use these methods to explore the quality of candidate hash functions, including CRC-32, H3 (with fixed seed), MD5 and others. We also propose optimized building blocks for hardware hash functions based on SP-networks. Given a size budget of 4K gates and only 1 cycle to compute the result, we demonstrate a 128 bit input, 64 bit output hash function built using this framework that ranks highly in our tests.
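As a concrete illustration of the kind of hardware-friendly hash discussed in this abstract, here is a minimal table-driven CRC-32 sketch in Python, using the standard reflected polynomial 0xEDB88320; the self-check against zlib confirms the construction. This is an illustrative software model, not the paper's evaluation harness.

```python
import zlib

# Minimal table-driven CRC-32 (reflected polynomial 0xEDB88320), the
# same construction commonly realized in networking hardware.
def make_crc32_table(poly: int = 0xEDB88320) -> list:
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = make_crc32_table()

def crc32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# Self-check against the standard library implementation.
assert crc32(b"123456789") == zlib.crc32(b"123456789")  # 0xCBF43926
```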
RECTANGLE: a bit-slice lightweight block cipher suitable for multiple platforms. In this paper, we propose a new lightweight block cipher named RECTANGLE. The main idea of the design of RECTANGLE is to allow lightweight and fast implementations using bit-slice techniques. RECTANGLE uses an SP-network. The substitution layer consists of 16 4×4 S-boxes in parallel. The permutation layer is composed of 3 rotations. As shown in this paper, RECTANGLE offers great performance in both hardware and software environments, which provides enough flexibility for different application scenarios. The following are 3 main advantages of RECTANGLE. First, RECTANGLE is extremely hardware-friendly. For the 80-bit key version, a one-cycle-per-round parallel implementation only needs 1600 gates for a throughput of 246 Kbits/sec at 100 KHz clock and an energy efficiency of 3.0 pJ/bit. Second, RECTANGLE achieves a very competitive software speed among the existing lightweight block ciphers due to its bit-slice style. Using 128-bit SSE instructions, a bit-slice implementation of RECTANGLE reaches an average encryption speed of about 3.9 cycles/byte for messages around 3000 bytes. Last, but not least, we propose new design criteria for the RECTANGLE S-box. Due to our careful selection of the S-box and the asymmetric design of the permutation layer, RECTANGLE achieves a very good security-performance tradeoff. Our extensive and deep security analysis shows that the highest number of rounds that we can attack is 18 (out of 25).
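To make the bit-slice SP-network structure concrete, the sketch below implements one generic round over a 4×16-bit state: each of the 16 columns feeds a 4-bit S-box, then the rows are rotated. The S-box values and rotation offsets are placeholders for illustration and should not be taken as RECTANGLE's published constants; no key schedule is modeled.

```python
# Illustrative bit-slice SP-network round in the spirit of RECTANGLE.
# The state is four 16-bit rows; each of the 16 columns forms a 4-bit
# S-box input. NOTE: the S-box below and the rotation offsets (1, 12, 13)
# are placeholders for illustration, not RECTANGLE's published constants.

SBOX = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
        0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]  # hypothetical 4-bit S-box

def rotl16(x: int, r: int) -> int:
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def sub_columns(rows):
    out = [0, 0, 0, 0]
    for col in range(16):
        nibble = sum(((rows[i] >> col) & 1) << i for i in range(4))
        s = SBOX[nibble]
        for i in range(4):
            out[i] |= ((s >> i) & 1) << col
    return out

def shift_rows(rows):
    return [rows[0], rotl16(rows[1], 1), rotl16(rows[2], 12), rotl16(rows[3], 13)]

def round_fn(rows, round_key):
    rows = [r ^ k for r, k in zip(rows, round_key)]  # AddRoundKey
    return shift_rows(sub_columns(rows))
```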
Securing Branch Predictors with Two-Level Encryption Modern processors rely on various speculative mechanisms to meet performance demand. Branch predictors are one of the most important micro-architecture components to deliver performance. However, they have been under heavy scrutiny because of recent side-channel attacks. Branch predictors are indexed using the PC and recent branch histories. An adversary can manipulate these parameters to access and control the same branch predictor entry that a victim uses. Recent Spectre attacks exploit this to set up speculative-execution-based security attacks. In this article, we aim to mitigate branch predictor side-channels using two-level encryption. At the first level, we randomize the set-index by encrypting the PC using a per-context secret key. At the second level, we encrypt the data in each branch predictor entry. While periodic key changes make the branch predictor more secure, performance degradation can be significant. To alleviate performance degradation, we propose a practical set update mechanism that also considers parallelism in multi-banked branch predictors. We show that our mechanism exhibits only 1.0% and 0.2% performance degradation while changing keys every 10K and 50K cycles, respectively, which is much lower than other state-of-the-art approaches.
Exploring Branch Predictors for Constructing Transient Execution Trojans Transient execution is one of the most critical features used in CPUs to achieve high performance. Recent Spectre attacks demonstrated how this feature can be manipulated to force applications to reveal sensitive data. The industry quickly responded with a series of software and hardware mitigations among which microcode patches are the most prevalent and trusted. In this paper, we argue that currently deployed protections still leave room for constructing attacks. We do so by presenting transient trojans, software modules that conceal their malicious activity within transient execution mode. They appear completely benign, pass static and dynamic analysis checks, but reveal sensitive data when triggered. To construct these trojans, we perform a detailed analysis of the attack surface currently present in today's systems with respect to the recommended mitigation techniques. We reverse engineer branch predictors in several recent x86_64 processors which allows us to uncover previously unknown exploitation techniques. Using these techniques, we construct three types of transient trojans and demonstrate their stealthiness and practicality.
ret2spec: Speculative Execution Using Return Stack Buffers. Speculative execution is an optimization technique that has been part of CPUs for over a decade. It predicts the outcome and target of branch instructions to avoid stalling the execution pipeline. However, until recently, the security implications of speculative code execution have not been studied. In this paper, we investigate a special type of branch predictor that is responsible for predicting return addresses. To the best of our knowledge, we are the first to study return address predictors and their consequences for the security of modern software. In our work, we show how return stack buffers (RSBs), the core unit of return address predictors, can be used to trigger misspeculations. Based on this knowledge, we propose two new attack variants using RSBs that give attackers similar capabilities as the documented Spectre attacks. We show how local attackers can gain arbitrary speculative code execution across processes, e.g., to leak passwords another user enters on a shared system. Our evaluation showed that the recent Spectre countermeasures deployed in operating systems can also cover such RSB-based cross-process attacks. Yet we then demonstrate that attackers can trigger misspeculation in JIT environments in order to leak arbitrary memory content of browser processes. Reading outside the sandboxed memory region with JIT-compiled code is still possible with 80% accuracy on average.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Fuzzy tracking control design for nonlinear dynamic systems via T-S fuzzy model This study introduces a fuzzy control design method for nonlinear systems with a guaranteed H∞ model reference tracking performance. First, the Takagi and Sugeno (TS) fuzzy model is employed to represent a nonlinear system. Next, based on the fuzzy model, a fuzzy observer-based fuzzy controller is developed to keep the tracking error as small as possible for all bounded reference inputs. The advantage of the proposed tracking control design is that only a simple fuzzy controller is used, without feedback linearization or a complicated adaptive scheme. By the proposed method, the fuzzy tracking control design problem is parameterized in terms of a linear matrix inequality problem (LMIP). The LMIP can be solved very efficiently using convex optimization techniques. A simulation example is given to illustrate the design procedure and tracking performance of the proposed method.
Incremental Stochastic Subgradient Algorithms for Convex Optimization This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorithm is studied. In this, the agents form a ring structure and pass the iterate in a cycle. When there are stochastic errors in the subgradient evaluations, sufficient conditions on the moments of the stochastic errors are obtained that guarantee almost sure convergence when a diminishing step-size is used. In addition, almost sure bounds on the algorithm's performance with a constant step-size are also obtained. Next, the Markov randomized incremental subgradient method is studied. This is a noncyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time nonhomogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes across time in these networks. Convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes are obtained.
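A minimal sketch of the cyclic incremental subgradient iteration described above, assuming toy scalar components f_i(x) = |x − a_i| and a diminishing step size 1/(k+1); in the paper's setting each f_i would live on a separate network agent rather than in one loop.

```python
import numpy as np

# Minimal sketch of the cyclic incremental subgradient method for
# minimizing f(x) = sum_i f_i(x), here with f_i(x) = |x - a_i| so the
# subgradient is sign(x - a_i). The diminishing step size 1/(k+1)
# satisfies the usual conditions (sum diverges, sum of squares converges).

a = np.array([1.0, 3.0, 4.0, 10.0])   # toy component "anchor" points
x = 0.0
for k in range(10000):
    step = 1.0 / (k + 1)
    for ai in a:                       # one cycle visits every agent once
        g = np.sign(x - ai)            # subgradient of |x - a_i| at x
        x = x - step * g
print(x)  # converges toward a median of a (any point in [3, 4] here)
```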
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
A 60-GHz 16QAM/8PSK/QPSK/BPSK Direct-Conversion Transceiver for IEEE802.15.3c. This paper presents a 60-GHz direct-conversion transceiver using 60-GHz quadrature oscillators. The transceiver has been fabricated in a standard 65-nm CMOS process. It includes a receiver with a 17.3-dB conversion gain and less than 8.0-dB noise figure, a transmitter with a 18.3-dB conversion gain, a 9.5-dBm output 1 dB compression point, a 10.9-dBm saturation output power and 8.8% power added ...
CCFI: Cryptographically Enforced Control Flow Integrity Control flow integrity (CFI) restricts jumps and branches within a program to prevent attackers from executing arbitrary code in vulnerable programs. However, traditional CFI still offers attackers too much freedom to chose between valid jump targets, as seen in recent attacks. We present a new approach to CFI based on cryptographic message authentication codes (MACs). Our approach, called cryptographic CFI (CCFI), uses MACs to protect control flow elements such as return addresses, function pointers, and vtable pointers. Through dynamic checks, CCFI enables much finer-grained classification of sensitive pointers than previous approaches, thwarting all known attacks and resisting even attackers with arbitrary access to program memory. We implemented CCFI in Clang/LLVM, taking advantage of recently available cryptographic CPU instructions (AES-NI). We evaluate our system on several large software packages (including nginx, Apache and memcache) as well as all their dependencies. The cost of protection ranges from a 3--18% decrease in server request rate. We also expect this overhead to shrink as Intel improves the performance of AES-NI.
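The following sketch illustrates the MAC-protected-pointer idea in Python. CCFI itself operates inside compiled code with AES-NI-based MACs; the HMAC construction, function names, and addresses below are illustrative stand-ins, not the paper's implementation.

```python
import hmac, hashlib, secrets

# Sketch of the CCFI idea: bind a pointer to its storage address with a
# MAC, and verify before use. HMAC-SHA256 stands in for the AES-NI MAC.

KEY = secrets.token_bytes(32)

def mac_pointer(ptr: int, slot_addr: int) -> bytes:
    msg = ptr.to_bytes(8, "little") + slot_addr.to_bytes(8, "little")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

def check_pointer(ptr: int, slot_addr: int, tag: bytes) -> int:
    if not hmac.compare_digest(mac_pointer(ptr, slot_addr), tag):
        raise RuntimeError("control-flow pointer corrupted")
    return ptr

ret, slot = 0x401234, 0x7FFF0000        # hypothetical address values
tag = mac_pointer(ret, slot)            # on store (e.g., function entry)
check_pointer(ret, slot, tag)           # on load (e.g., before return)
```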
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia record 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.2
0.2
0.2
0.2
0.2
0.1
0.022222
0
0
0
0
0
0
0
Initializing newly deployed ad hoc and sensor networks A newly deployed multi-hop radio network is unstructured and lacks a reliable and efficient communication scheme. In this paper, we take a step towards analyzing the problems existing during the initialization phase of ad hoc and sensor networks. Particularly, we model the network as a multi-hop quasi unit disk graph and allow nodes to wake up asynchronously at any time. Further, nodes do not feature a reliable collision detection mechanism, and they have only limited knowledge about the network topology. We show that even for this restricted model, a good clustering can be computed efficiently. Our algorithm efficiently computes an asymptotically optimal clustering. Based on this algorithm, we describe a protocol for quickly establishing synchronized sleep and listen schedules between nodes within a cluster. Additionally, we provide simulation results in a variety of settings.
The emergence of a networking primitive in wireless sensor networks The wireless sensor network community approached networking abstractions as an open question, allowing answers to emerge with time and experience. The Trickle algorithm has become a basic mechanism used in numerous protocols and systems. Trickle brings nodes to eventual consistency quickly and efficiently while remaining remarkably robust to variations in network density, topology, and dynamics. Instead of flooding a network with packets, Trickle uses a "polite gossip" policy to control send rates so each node hears just enough packets to stay consistent. This simple mechanism enables Trickle to scale to 1000-fold changes in network density, reach consistency in seconds, and require only a few bytes of state yet impose a maintenance cost of a few sends an hour. Originally designed for disseminating new code, experience has shown Trickle to have much broader applicability, including route maintenance and neighbor discovery. This paper provides an overview of the research challenges wireless sensor networks face, describes the Trickle algorithm, and outlines several ways it is used today.
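A minimal sketch of the Trickle timer logic the abstract describes: transmissions are suppressed once a redundancy constant k of consistent messages is heard, the interval doubles while the network is consistent, and it resets on inconsistency. Parameter values and the print stub are illustrative.

```python
import random

# Minimal sketch of the Trickle "polite gossip" timer, assuming the
# standard parameters: minimum interval i_min, maximum interval i_max,
# and redundancy constant k. A real node drives hear_*() from radio
# traffic; here they are called by the surrounding system.

class Trickle:
    def __init__(self, i_min=1.0, i_max=64.0, k=2):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.interval = i_min
        self.reset_interval()

    def reset_interval(self):
        self.counter = 0
        # fire at a random point in the second half of the interval
        self.t_fire = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        self.counter += 1        # a neighbor already said what we would say

    def on_timer(self):
        if self.counter < self.k:
            print("transmit")    # few neighbors spoke: our turn
        # else: suppress -- enough redundancy already heard
        self.interval = min(self.interval * 2, self.i_max)
        self.reset_interval()

    def hear_inconsistent(self):
        self.interval = self.i_min   # new data: respond quickly again
        self.reset_interval()
```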
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Initializing sensor networks of non-uniform density in the weak sensor model Assumptions about node density in the Sensor Networks literature are frequently too strong or too weak. Neither absolutely arbitrary nor uniform deployment seem feasible in most of the intended applications of sensor nodes. We present a Weak Sensor Model-compatible distributed protocol for hop-optimal network initialization, under the assumption that the maximum density of nodes is some value Δ known by all of the nodes. In order to prove lower bounds, we observe that all nodes must communicate with some other node in order to join the network, and we call the problem of achieving such a communication the Group Therapy Problem. We show lower bounds for the Group Therapy Problem in Radio Networks of maximum density Δ, regardless of the use of randomization, and a stronger lower bound for the important class of randomized fair protocols. We also show that even when nodes are distributed uniformly, the same lower bound holds, even in expectation and even for the simpler problem of Clear Transmission.
Local Divergence of Markov Chains and the Analysis of Iterative Load-Balancing Schemes We develop a general technique for the quantitative analysis of iterative distributed load balancing schemes. We illustrate the technique by studying two simple, intuitively appealing models that are prevalent in the literature: the diffusive paradigm, and periodic balancing circuits (or the dimension exchange paradigm). It is well known that such load balancing schemes can be roughly modeled by Markov chains, but also that this approximation can be quite inaccurate. Our main contribution is an effective way of characterizing the deviation between the actual loads and the distribution generated by a related Markov chain, in terms of a natural quantity which we call the local divergence. We apply this technique to obtain bounds on the number of rounds required to achieve coarse balancing in general networks, cycles and meshes in these models. For balancing circuits, we also present bounds for the stronger requirement of perfect balancing, or counting.
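For intuition, here is a minimal sketch of the diffusive paradigm the abstract analyzes, on a cycle of n nodes: in each round every node exchanges a fixed fraction alpha of its load difference with each neighbor, and loads converge to the average. The topology, alpha, and round count are illustrative choices.

```python
import numpy as np

# Diffusive load balancing on a cycle of n nodes: each round, node i
# moves alpha * (load_j - load_i) toward each neighbor j. This is the
# Markov-chain approximation discussed in the abstract (real tasks are
# integral, which is exactly where the "local divergence" gap arises).

n, alpha, rounds = 8, 0.25, 100
load = np.array([40.0, 0, 0, 0, 0, 0, 0, 0])   # all load starts at node 0
for _ in range(rounds):
    left, right = np.roll(load, 1), np.roll(load, -1)
    load = load + alpha * (left - load) + alpha * (right - load)
print(load)  # close to the uniform average, 5.0 per node
```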
Synopsis diffusion for robust aggregation in sensor networks Aggregating sensor readings within the network is an essential technique for conserving energy in sensor networks. Previous work proposes aggregating along a tree overlay topology in order to conserve energy. However, a tree overlay is very fragile, and the high rate of node and link failures in sensor networks often results in a large fraction of readings being unaccounted for in the aggregate. Value splitting on multi-path overlays, as proposed in TAG, reduces the variance in the error, but still results in significant errors. Previous approaches are fragile, fundamentally, because they tightly couple aggregate computation and message routing. In this paper, we propose a family of aggregation techniques, called synopsis diffusion, that decouples the two, enabling aggregation algorithms and message routing to be optimized independently. As a result, the level of redundancy in message routing (as a trade-off with energy consumption) can be adapted to both expected and encountered network conditions. We present a number of concrete examples of synopsis diffusion algorithms, including a broadcast-based instantiation of synopsis diffusion that is as energy efficient as a tree, but dramatically more robust.
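Synopsis diffusion relies on order- and duplicate-insensitive (ODI) synopses, so that hearing the same reading over several paths cannot inflate the aggregate. The sketch below uses a Flajolet-Martin-style counting synopsis, a classic ODI building block, as an illustration rather than the paper's exact construction.

```python
import hashlib

# Duplicate-insensitive counting synopsis in the spirit of
# Flajolet-Martin: merging is bitwise OR, which is idempotent,
# commutative, and associative -- the ODI properties synopsis
# diffusion needs. Illustrative only.

def synopsis(reading_id: str, bits: int = 32) -> int:
    h = int.from_bytes(hashlib.sha256(reading_id.encode()).digest()[:8], "big")
    lsb = (h & -h).bit_length() - 1 if h else bits - 1  # lowest set bit
    return 1 << min(lsb, bits - 1)

def merge(s1: int, s2: int) -> int:
    return s1 | s2  # re-hearing a reading cannot change the synopsis

def estimate(s: int) -> float:
    r = 0
    while s & (1 << r):
        r += 1                     # position of the first zero bit
    return (2 ** r) / 0.77351      # standard Flajolet-Martin correction

s = 0
for rid in [f"reading-{i}" for i in range(1000)] * 2:  # duplicates are harmless
    s = merge(s, synopsis(rid))
print(estimate(s))  # rough (high-variance) estimate of 1000
```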
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
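A minimal sketch of LEACH's randomized cluster-head rotation, assuming the standard election threshold T(n) = P / (1 − P·(r mod 1/P)) for nodes that have not yet served in the current epoch; the value P = 0.05 is an illustrative choice, not prescribed by the paper's simulations.

```python
import random

# Sketch of LEACH's cluster-head election. Each round r, an eligible
# node becomes cluster head with probability T(n); the threshold rises
# as the epoch ages so that every node serves exactly once per epoch
# in expectation, spreading the energy cost of aggregation evenly.

P = 0.05  # desired fraction of cluster heads per round (illustrative)

def is_cluster_head(r: int, was_head_this_epoch: bool) -> bool:
    if was_head_this_epoch:
        return False                      # ineligible until the epoch ends
    t = P / (1 - P * (r % int(1 / P)))    # threshold T(n) for round r
    return random.random() < t
```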
Coordinated consensus in dynamic networks We study several variants of coordinated consensus in dynamic networks. We assume a synchronous model, where the communication graph for each round is chosen by a worst-case adversary. The network topology is always connected, but can change completely from one round to the next. The model captures mobile and wireless networks, where communication can be unpredictable. In this setting we study the fundamental problems of eventual, simultaneous, and Δ-coordinated consensus, as well as their relationship to other distributed problems, such as determining the size of the network. We show that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes. We also give an algorithm for computing such functions that is optimal in every execution. Next, we show that simultaneous consensus can never be achieved in less than n - 1 rounds in any execution, where n is the size of the network; consequently, simultaneous consensus is as hard as computing an upper bound on the number of nodes in the network. For Δ-coordinated consensus, we show that if the ratio between nodes with input 0 and input 1 is bounded away from 1, it is possible to decide in time n − Θ(√(nΔ)), where Δ bounds the time from the first decision until all nodes decide. If the dynamic graph has diameter D, the time to decide is min{O(nD/Δ), n − Ω(nΔ/D)}, even if D is not known in advance. Finally, we show that (a) there is a dynamic graph such that for every input, no node can decide before time n − O(Δ^0.28 n^0.72); and (b) for any diameter D = O(Δ), there is an execution with diameter D where no node can decide before time Ω(nD/Δ). To our knowledge, our work constitutes the first study of Δ-coordinated consensus in general graphs.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
A study of phase noise in CMOS oscillators This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5-µm CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB. VOLTAGE-CONTROLLED oscillators (VCO's) are an integral part of phase-locked loops, clock recovery circuits, and frequency synthesizers. Random fluctuations in the output frequency of VCO's, expressed in terms of jitter and phase noise, have a direct impact on the timing accuracy where phase alignment is required and on the signal-to-noise ratio where frequency translation is performed. In particular, RF oscillators employed in wireless transceivers must meet stringent phase noise requirements, typically mandating the use of passive LC tanks with a high quality factor Q. However, the trend toward large-scale integration and low cost makes it desirable to implement oscillators monolithically. The paucity of literature on noise in such oscillators together with a lack of experimental verification of underlying theories has motivated this work. This paper provides a study of phase noise in two inductorless CMOS VCO's. Following a first-order analysis of a linear oscillatory system and introducing a new definition of Q, we employ a linearized model of ring oscillators to obtain an estimate of their noise behavior. We also describe the limitations of the model, identify three mechanisms leading to phase noise, and use the same concepts to analyze a CMOS relaxation oscillator. In contrast to previous studies where time-domain jitter has been investigated (1), (2), our analysis is performed in the frequency domain to directly determine the phase noise. Experimental results obtained from a 2-GHz ring oscillator and a 900-MHz relaxation oscillator indicate that, despite many simplifying approximations, lack of accurate MOS models for RF operation, and the use of simple noise
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and treats them as an additional constraint on the decision-making function of the cognitive cycle. In this paper, we explain how sensors distributed throughout the different layers of our CR model can help take the decision that best contributes to sustainable development.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.110116
0.110116
0.107729
0.079758
0.072997
0.036077
0.003397
0.000234
0
0
0
0
0
0
Memory-efficient FFT architecture using R-LFSR based CORDIC common operator In the Software Defined Radio (SDR) area, parameterization is becoming a very important topic in the design of multi-standard terminals. In this context, the Common Operator (CO) technique defines an open and optimized terminal based on a limited set of generic components called Common Operators. The method was already described in earlier work, where a relevant new possible CO was presented: the R-LFSR based CORDIC, the result of a synergy study between CORDIC and the Reconfigurable LFSR (R-LFSR). We present in this work an original FFT architecture based on the CORDIC in which the R-LFSR is exploited. In this case, FFT functions which were performed by CORDIC can be performed by the R-LFSR and vice versa. The novel FFT architecture was successfully implemented on a Virtex-4 FPGA to compare with an FFT using conventional CORDIC. The complexity evaluation is presented.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
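A minimal sketch of dominance-frontier computation; this uses the compact predecessor-walk formulation popularized later by Cooper, Harvey, and Kennedy rather than the original paper's presentation, and the toy CFG is illustrative.

```python
# Dominance frontiers via the predecessor-walk formulation: for every
# join node (>= 2 predecessors), walk each predecessor's dominator-tree
# path up to the join node's immediate dominator, adding the join node
# to each visited node's frontier. `idom` maps each node to its
# immediate dominator (the entry maps to itself).

def dominance_frontiers(preds: dict, idom: dict) -> dict:
    df = {n: set() for n in idom}
    for n, ps in preds.items():
        if len(ps) >= 2:
            for p in ps:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Toy diamond CFG: entry -> a, entry -> b, a -> merge, b -> merge
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))  # 'a' and 'b' have {'merge'}
```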
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
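To make the key-to-node mapping concrete, here is a minimal sketch of Chord-style successor lookup on a hash ring. It assumes a full view of the ring for clarity, whereas a real Chord node resolves the successor in O(log N) hops via finger tables; the 32-bit identifier space and node names are illustrative.

```python
import hashlib
from bisect import bisect_left

# Chord's core operation: map a key to the node whose identifier is the
# key's successor on the identifier ring.

M = 2 ** 32  # identifier space (illustrative size)

def node_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big") % M

def successor(key: str, ring: list) -> int:
    k = node_id(key)
    i = bisect_left(ring, k)
    return ring[i % len(ring)]   # wrap around the ring past the top

nodes = sorted(node_id(f"node-{i}") for i in range(8))
print(successor("some-data-item", nodes))  # the node storing this key
```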
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
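A minimal sketch of ADMM applied to the lasso, one of the examples mentioned in the abstract, using the standard scaled-form updates (a ridge-style x-update, a soft-thresholding z-update, and a dual ascent step); the data and parameter values are synthetic toys.

```python
import numpy as np

# ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
lam, rho = 0.1, 1.0

x = z = u = np.zeros(20)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(20))   # factor once, reuse

def soft(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    z = soft(x + u, lam / rho)   # z-update: proximal step for the l1 term
    u = u + x - z                # scaled dual update

print(np.round(z, 3))  # lasso solution; small coefficients shrunk to zero
```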
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Effective Processor Verification with Logic Fuzzer Enhanced Co-simulation The study on verification trends in the semiconductor industry shows that the design complexity is increasing, fewer companies achieve first silicon success and need more spins before production, companies hire more verification engineers, and 53% of the whole hardware-design-cycle is spent on the design verification [18]. The cost of a respin is high, and more than 40% of the cases that contribute to it are post-fabrication functional bug exposures [16]. The study also shows that 65% of verification engineers' time is spent on debug, test creation, and simulation [17]. This paper presents a set of tools for RISC-V processor verification engineers that help to expose more bugs before production and increase the productivity of time spent on debugging, test creation and simulation. We present Logic Fuzzer (LF), a novel tool that expands the verification space exploration without the creation of additional verification tests. The LF randomizes the states or control signals of the design-under-test at the places that do not affect functionality. It brings the processor execution outside its normal flow to increase the number of microarchitectural states exercised by the tests. We also present Dromajo, a state-of-the-art processor verification framework for RISC-V cores. Dromajo is an RV64GC emulator that was designed specifically for co-simulation purposes. It can boot Linux, handle external stimuli, such as interrupts and debug requests on the fly, and can be integrated into existing testbench infrastructure with minimal effort. We evaluate the effectiveness of the tools on three RISC-V cores: CVA6, BlackParrot, and BOOM. Dromajo by itself found a total of nine bugs. The enhancement of Dromajo with the Logic Fuzzer increases the exposed bug count to thirteen without creating additional verification tests.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
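A minimal sketch of the threaded-code idea in Python: the "machine code" is a flat list of primitive references that each transfer control to the next, with no central decode loop. The primitives and the tiny stack machine are illustrative choices, not from the original paper.

```python
# Direct threaded code: a program is a list of operation "addresses"
# (here, function references) rather than opcodes a dispatch loop must
# decode. Each primitive advances the thread pointer itself.

stack = []

def lit(thread, ip):
    stack.append(thread[ip + 1])   # inline operand follows the op
    return ip + 2

def add(thread, ip):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return ip + 1

def halt(thread, ip):
    return None

def run(thread):
    ip = 0
    while ip is not None:
        ip = thread[ip](thread, ip)  # jump directly to the next primitive

run([lit, 2, lit, 40, add, halt])
print(stack.pop())  # 42
```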
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter.
An approach to testing specifications An approach to testing the consistency of specifications is explored, which is applicable to the design validation of communication protocols and other cases of step-wise refinement. In this approach, a testing module compares a trace of interactions obtained from an execution of the refined specification (e. g. the protocol specification) with the reference specification (e. g. the communication service specification). Non-determinism in reference specifications presents certain problems. Using an extended finite state transition model for the specifications, a strategy for limiting the amount of non-determinacy is presented. An automated method for constructing a testing module for a given reference specification is discussed. Experience with the application of this testing approach to the design of a Transport protocol and a distributed mutual exclusion algorithm is described.
Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing (HPC). Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is lacking. For example, it is often unclear if reported improvements are deterministic or observed by chance. In addition to distilling best practices from existing work, we propose statistically sound analysis and reporting techniques and simple guidelines for experimental design in parallel computing and codify them in a portable benchmarking library. We aim to improve the standards of reporting research results and initiate a discussion in the HPC field. A wide adoption of our minimal set of rules will lead to better interpretability of performance results and improve the scientific culture in HPC.
OpenFPGA: An Opensource Framework Enabling Rapid Prototyping of Customizable FPGAs Driven by the strong need in data processing applications, Field Programmable Gate Arrays (FPGAs) are playing an ever-increasing role as programmable accelerators in modern computing systems. To fully unlock processing capabilities for domain-specific applications, FPGA architectures have to be tailored for seamless cooperation with other computing resources. However, prototyping and bringing to production a customized FPGA is a costly and complex endeavor even for industrial vendors. In this paper, we introduce OpenFPGA, an opensource framework that enables rapid prototyping of customizable FPGA architectures through a semi-custom design approach. We propose an XML-to-Prototype design flow, where the Verilog netlists of a full FPGA fabric can be autogenerated using an extension of the XML language from the VTR framework and then fed into a back-end flow to generate production-ready layouts. OpenFPGA also includes a general-purpose Verilog-to-Bitstream generator for any FPGA described by the XML language. We demonstrate the capability of this automatic design flow with a Stratix IV-like FPGA architecture using a commercial 40nm technology node, and perform a detailed comparison to its academic and commercial counterparts. Compared to the current state-of-art academic results, our FPGA fabric reduces the area by 1.75× and the delay by 3× on average. In addition, OpenFPGA significantly reduces the gap between semi-custom designed FPGAs and fully-optimized commercial products with a penalty of only 60% in area and 30% in delay, respectively.
Hardware Design with a Scripting Language The Python Hardware Description Language (PyHDL) provides a scripting interface to object-oriented hardware design in C++. PyHDL uses the PamDC and PAM-Blox libraries to generate FPGA circuits. The main advantage of scripting languages is a reduction in development time for high-level designs. We propose a two-step approach: first, use scripting to explore effects of composition and parameterisation; second, convert the scripted designs into compiled components for performance. Our results show that, for small designs, our method offers a 5 to 7 times improvement in turnaround time. For a large 10x10 matrix vector multiplier, our method offers 365% and 19% improvements in turnaround time over purely scripted and purely compiled methods, respectively.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
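To make the traffic-reduction argument above concrete, here is a toy count of messages on a small aggregation tree: without aggregation every leaf reading is relayed hop by hop to the sink, whereas with in-network aggregation each link carries a single combined message. The topology and the assumption that only leaves sense are invented for illustration and do not come from the paper's testbed.

```python
# Toy illustration of why in-network aggregation reduces traffic: on an
# aggregation tree, each node forwards one combined message instead of
# relaying every descendant reading. Topology is made up.

children = {'sink': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2', 'b3']}

def count_links(node):
    # with aggregation: exactly one message crosses each tree link
    kids = children.get(node, [])
    return len(kids) + sum(count_links(k) for k in kids)

def leaf_hops(node, depth=0):
    # without aggregation: each leaf reading travels `depth` hops to the sink
    kids = children.get(node, [])
    if not kids:
        return depth
    return sum(leaf_hops(k, depth + 1) for k in kids)

print("messages without aggregation:", leaf_hops('sink'))    # 10
print("messages with aggregation:   ", count_links('sink'))  # 7
```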
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
An artificial neural network (p,d,q) model for timeseries forecasting Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, their performance on some real time series is not satisfactory. Improving forecasting accuracy, especially for time series, is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks alone. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting tasks, especially when higher forecasting accuracy is needed.
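A hedged sketch of the hybrid idea the abstract describes: fit an ARIMA model for the linear component, then train a small neural network on lagged ARIMA residuals for the nonlinear component, and sum the two one-step forecasts. The orders, lag count, network size, and synthetic series below are illustrative choices, not the paper's.

```python
# Minimal sketch of an ARIMA + ANN hybrid: ARIMA models the linear part,
# an MLP models the nonlinear structure left in the residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) + np.sin(np.arange(300) / 5.0)

arima = ARIMA(y, order=(1, 1, 1)).fit()   # order (p,d,q) is an assumption
resid = arima.resid                        # nonlinear structure lives here

lags = 4
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
t = resid[lags:]                           # predict next residual from lags
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X, t)

linear_part = arima.forecast(steps=1)[0]
nonlinear_part = ann.predict(resid[-lags:].reshape(1, -1))[0]
print("hybrid one-step forecast:", linear_part + nonlinear_part)
```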
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
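For the controller-design step mentioned above, here is a minimal sketch of LQR synthesis on a linear quarter-car model: solve the continuous-time algebraic Riccati equation and form the state-feedback gain. All masses, stiffnesses, and weighting matrices are placeholder values, the road input is treated as an unmodeled disturbance, and none of the numbers come from the paper.

```python
# LQR design for a 4-state quarter-car model (illustrative values only).
import numpy as np
from scipy.linalg import solve_continuous_are

ms, mu = 250.0, 40.0            # sprung / unsprung mass [kg] (assumed)
ks, kt, cs = 16e3, 160e3, 1e3   # spring, tire stiffness, damping (assumed)

# x = [zs - zu, zs_dot, zu - zr, zu_dot]; u = actuator force;
# road velocity enters x3_dot as a disturbance and is omitted from A.
A = np.array([[0,       1,      0,      -1],
              [-ks/ms, -cs/ms,  0,       cs/ms],
              [0,       0,      0,       1],
              [ks/mu,   cs/mu, -kt/mu,  -cs/mu]])
B = np.array([[0], [1/ms], [0], [-1/mu]])

Q = np.diag([1e4, 1e2, 1e5, 1e0])   # comfort vs. tire-load weights (assumed)
R = np.array([[1e-6]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal state feedback u = -K x
print("LQR gain:", K)
```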
The real-time segmentation of indoor scene based on RGB-D sensor The vision system of a mobile robot is a low-level function that provides the target information of the current environment required by higher-level vision tasks. The real-time performance and robustness of object segmentation in cluttered environments is still a serious problem in robot vision. In this paper, a new real-time indoor scene segmentation method based on RGB-D images is presented, and the extracted primary object regions are then used for object recognition. Firstly, this paper accomplishes depth filtering with an improved version of the traditional filtering method. Then, using the improved depth information, the algorithm extracts the foreground and segments the color image at a resolution of 640×480 from a Kinect camera. Finally, the segmentation results are applied to object recognition in indoor scenes to validate the effectiveness of the scene segmentation. The results of indoor segmentation demonstrate the real-time performance and robustness of the proposed method. In addition, the segmentation results improve the accuracy and reduce the time of object recognition in indoor cluttered scenes.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 µW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
A matrix approach to the modeling and analysis of networked evolutionary games with time delays Using the semi-tensor product method, this paper investigates the modeling and analysis of networked evolutionary games (NEGs) with finite memories, and presents a number of new results. Firstly, a kind of algebraic expression is formulated for the networked evolutionary games with finite memories, based on which the behavior of the corresponding evolutionary game is analyzed. Secondly, under a pr...
On Decomposed Subspaces of Finite Games. This note provides the detailed description of the decomposed subspaces of finite games. First, the basis of potential games and the basis of non-strategic games are revealed. Then the bases of pure potential and pure harmonic subspaces are also obtained. These bases provide an explicit formula for the decomposition, and are convenient for investigating the properties of the corresponding subspaces. As an application, we consider the dynamics of networked evolutionary games (NEGs). Three problems are considered: 1) the dynamic equivalence of evolutionary games; 2) the dynamics of near potential games; and 3) the decomposition of NEGs.
Evolutionary game theoretic demand-side management and control for a class of networked smart grid. In this paper, a new demand-side management problem of networked smart grid is formulated and solved based on evolutionary game theory. The objective is to minimize the overall cost of the smart grid, where individual communities can switch between grid power and local power according to strategies of their neighbors. The distinctive feature of the proposed formulation is that, a small portion of the communities are cooperative, while others pursue their own benefits. This formulation can be categorized as control networked evolutionary game, which can be solved systematically by using semi-tensor product. A new binary optimal control algorithm is applied to optimize transient performances of the networked evolutionary game.
Strategy optimization of weighted networked evolutionary games with switched topologies and threshold In real life, individuals and enterprises alike are related to many other parties, but they only choose the parties they trust when referring to information and playing games. In this study, a model named weighted networked evolutionary games (NEGs) with switched topologies is established, in which each player can choose the network he wishes to join at different moments, and within the selected network each player chooses, according to his own criteria, the genuine neighborhood players, called trusters. Using the semi-tensor product (STP) of matrices, the algebraic representation of the new model's evolution process is obtained. Since players need a minimum payoff threshold to survive, a strategy optimization algorithm is designed via state feedback control so that players' strategy choices can reach the threshold as they evolve. An example is given to illustrate the effectiveness of the algorithms.
Initial-State Observability of Mealy-Based Finite-State Machine With Nondeterministic Output Functions In mobile systems or the failure detection applications, the output for some input event is state-dependent and nondeterministic after intermittent sensor failures or measurement uncertainties, which does not hold under the conventional observability hypothesis. In this article, such cases can be modeled by a Mealy-based finite-state machine (FSM) with nondeterministic output functions, and we investigate the “initial-state” observability by use of matrix semitensor product (matrix-STP). First, to characterize the nondeterministic output functions, a virtual state set consisting of state–event pairs is introduced to obtain an augmented FSM. By resorting to the matrix-STP, the algebraic expression of augmented FSM is proposed. Subsequently, based on the newly constructed model, the initial-state observability can be verified by checking the distinguishability of state trajectories of the augmented FSM. Meanwhile, the necessary and sufficient condition for such initial-state observability is derived from a discriminant matrix consisting of polynomial elements. Finally, numerical examples show the validity of the proposed method. The current results are further conducive to explore the critical safety of cyber–physical systems in many real-world systems.
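The five abstracts above all rely on the semi-tensor product (STP) of matrices. A small numpy sketch of the usual definition: for A of size m×n and B of size p×q, with t = lcm(n, p), the STP is A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), which reduces to the ordinary matrix product when n = p. The example matrices are arbitrary.

```python
# Semi-tensor product of matrices via Kronecker products.
import numpy as np
from math import lcm   # Python 3.9+

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.array([[1.0, 2.0]])          # 1 x 2
B = np.array([[1.0], [0.0]])        # 2 x 1: dimensions match,
print(stp(A, B))                    # so this is the ordinary product [[1.]]

D = np.arange(4.0).reshape(4, 1)    # 4 x 1: ordinary product is undefined,
print(stp(A, D))                    # but the STP exists: [[4.], [7.]]
```

The dimension-bridging by identity blocks is what lets STP turn logical dynamics and game evolution into linear algebra over structure matrices, as the abstracts above exploit.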
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
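A sketch of the kind of heuristic such planners extract automatically from Strips encodings: under the independence assumption, the cost of achieving each proposition is computed by a fixpoint over the actions, and the heuristic value of a state is the sum of the costs of the goal propositions (the additive flavor). The toy domain and unit action costs below are invented for illustration.

```python
# Additive delete-relaxation heuristic computed by fixpoint iteration.
import math

def h_add(props, actions, init, goal):
    """actions: list of (preconds, effects) frozensets; unit action costs."""
    cost = {p: (0 if p in init else math.inf) for p in props}
    changed = True
    while changed:
        changed = False
        for pre, eff in actions:
            c = 1 + sum(cost[p] for p in pre)   # independence assumption
            for p in eff:
                if c < cost[p]:
                    cost[p] = c
                    changed = True
    return sum(cost[p] for p in goal)

props = {'at_a', 'at_b', 'have_key', 'door_open'}
actions = [(frozenset({'at_a'}), frozenset({'have_key'})),
           (frozenset({'have_key'}), frozenset({'door_open'})),
           (frozenset({'at_a', 'door_open'}), frozenset({'at_b'}))]
print(h_add(props, actions, init={'at_a'}, goal={'at_b'}))   # 3
```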
Probabilistic neural networks By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware “neurons” that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation.
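A minimal numpy sketch of the PNN idea: place one Gaussian kernel on every training pattern, score each class by the average kernel response, and classify by the largest score. The smoothing parameter and the synthetic two-class data are illustrative.

```python
# PNN-style classification with Gaussian kernels (one per training pattern).
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)               # Bayes-style argmax

rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, scale=0.4, size=(20, 2))
X1 = rng.normal(loc=2.0, scale=0.4, size=(20, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(X, y, np.array([1.8, 2.1])))       # -> 1
```

Training is just storing the patterns, which is why the abstract contrasts PNN adaptation time so favorably with back-propagation.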
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint on the decision-making function of the cognitive cycle. In this paper, we explain how sensors distributed throughout the different layers of our CR model can help in taking the best decision so as to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is in the mT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs>fEXC. The differential architecture rejects magnetic stray fields and achieves 750x larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion step, with 1.6-µW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
Scores (score_0–score_13): 1.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0
Algorithmic Voltage-Feed-In Topology for Fully Integrated Fine-Grained Rational Buck-Boost Switched-Capacitor DC-DC Converters. We propose an algorithmic voltage-feed-in (AVFI) topology capable of systematic generation of any arbitrary buck-boost rational ratio with optimal conduction loss while achieving reduced topology-level parasitic loss among the state-of-the-art works. By disengaging the existing topology-level restrictions, we develop a cell-level implementation using the extracted Dickson cell (DSC) and charge-pat...
General Top/Bottom-Plate Charge Recycling Technique for Integrated Switched Capacitor DC-DC Converters. Energy loss due to top/bottom plate parasitic capacitances is one of the factors determining the efficiency of integrated switched capacitor DC/DC converters. This loss is particularly significant when MOS gate or deep trench capacitors are used. We propose a technique for top/bottom-plate charge recycling that can be applied with low overhead independently of the converter architecture. Two examp...
A 20-pW Discontinuous Switched-Capacitor Energy Harvester for Smart Sensor Applications. We present a discontinuous harvesting approach for switch capacitor dc-dc converters that enables ultralow-power energy harvesting. Smart sensor applications rely on ultralow-power energy harvesters to scavenge energy across a wide range of ambient power levels and charge the battery. Based on the key observation that energy source efficiency is higher than charge pump efficiency, we present a dis...
An Arithmetic Progression Switched-Capacitor DC-DC Converter with Soft VCR Transitions Achieving 93.7% Peak Efficiency and 400 mA Output Current Dynamic source adaptation and supply modulation can benefit the power efficiency and system functionality of energy-harvesting interfaces, voltage-scalable SoCs, device drivers, power amplifiers, and others. A switched-capacitor (SC) DC-DC converter can achieve high power conversion efficiency (PCE) and power density at the hundreds-of-mW. Several reconfigurable SC topologies emerged to generate m...
Regulated Charge Pump With New Clocking Scheme for Smoothing the Charging Current in Low Voltage CMOS Process. A regulated cross-couple charge pump with new charging current smoothing technique is proposed and verified in a 0.18-μm 1.8-V/3.3-V CMOS process. The transient behaviors of 3-stage cross-couple charge pump and the expressions for the charging current are described in detail. The experiment results show that the charging current ripples are reduced by a factor of three through using the proposed n...
A Continuously-Scalable-Conversion-Ratio Step-Up/Down SC Energy-Harvesting Interface With MPPT Enabled by Real-Time Power Monitoring With Frequency-Mapped Capacitor DAC An energy-harvesting interface that incorporates a continuously scalable-conversion-ratio (CSCR) switched-capacitor (SC) dc-dc converter with maximum power point tracking (MPPT) is introduced in this paper. By exploiting unique characteristics of a CSCR SC converter, an MPPT based on the hill climbing algorithm is implemented with a real-time power monitoring scheme with a frequency-mapped samplin...
A Dual-Mode Continuously Scalable-Conversion-Ratio SC Energy Harvesting Interface With SC-Based PFM MPPT and Flying Capacitor Sharing Scheme This article proposes a continuously scalable-conversion-ratio (CSCR) switched-capacitor (SC) energy harvesting interface that extracts power from a thermoelectric generator (TEG), regulates a 0.75-V output load, and manages a 1.2–1.45-V battery. The structure employs the proposed CSCR SC converter to improve the power conversion efficiency up to 7.9% higher than that of the conventional converter...
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey. With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.
From Static Distributed Systems to Dynamic Systems A noteworthy advance in distributed computing is due to the recent development of peer-to-peer systems. These systems are essentially dynamic in the sense that no process can get a global knowledge on the system structure. They mainly allow processes to look up for data that can be dynamically added/suppressed in a permanently evolving set of nodes. Although protocols have been developed for such dynamic systems, to our knowledge, up to date no computation model for dynamic systems has been proposed. Nevertheless, there is a strong demand for the definition of such models as soon as one wants to develop provably correct protocols suited to dynamic systems. This paper proposes a model for (a class of) dynamic systems. That dynamic model is defined by (1) a parameter (an integer denoted a) and (2) two basic communication abstractions (query-response and persistent reliable broadcast). The new parameter a is a threshold value introduced to capture the liveness part of the system (it is the counterpart of the minimal number of processes that do not crash in a static system). To show the relevance of the model, the paper adapts an eventual leader protocol designed for the static model, and proves that the resulting protocol is correct within the proposed dynamic model. In that sense, the paper has also a methodological flavor, as it shows that simple modifications to existing protocols can allow them to work in dynamic systems.
A process calculus for Mobile Ad Hoc Networks We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation equivalence (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation equivalence, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. We additionally define a symbolic semantics for the ω-calculus extended with the mismatch operator, along with a corresponding notion of symbolic bisimulation equivalence, and establish congruence results for this extension as well. Finally, we illustrate the practical utility of the calculus by developing and analyzing formal models of a leader election protocol for MANETs and the AODV routing protocol.
Interactive presentation: An FPGA based all-digital transmitter with radio frequency output for software defined radio In this paper, we present the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes RF signal in the digital domain, therefore eliminates the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from RFPWM is centered at 800MHz with 64QAM signaling format. The entire transmitter is implemented using Xilinx Virtex2pro device with on chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR.
An efficient low-cost fixed-point digital down converter with modified filter bank In a radar system, the digital down converter (DDC), as the most important part of the IF radar receiver, extracts the needed baseband signal from the modulated IF signal and down-samples it with a decimation factor of 20. This paper proposes an efficient low-cost structure of the DDC, including an NCO, a mixer and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. Then an optimized fixed-point implementation is designed in order to realize the DDC on a fixed-point DSP or FPGA. The simulation results show that the proposed DDC achieves the expected specification in the application of an IF radar receiver.
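A rough numpy sketch of the DDC chain just described: an NCO generates a complex local oscillator, a mixer shifts the IF signal to baseband, and a lowpass filter plus downsampling performs the decimation by 20. A single windowed-sinc FIR stands in for the CIC/CFIR/HB filter bank, and the sample rates and tone offset are assumed values, not the paper's.

```python
# NCO + mixer + FIR lowpass + decimate-by-20 (floating point, illustrative).
import numpy as np

fs, f_if, decim = 80e6, 20e6, 20
n = np.arange(4000)
signal = np.cos(2 * np.pi * (f_if + 50e3) / fs * n)  # IF input, 50 kHz offset

nco = np.exp(-2j * np.pi * f_if / fs * n)            # NCO: complex LO
baseband = signal * nco                               # mixer shifts to DC

taps = 127
k = np.arange(taps) - (taps - 1) / 2
h = np.sinc(k / decim) * np.hamming(taps)            # cutoff ~ fs/(2*decim)
h /= h.sum()
filtered = np.convolve(baseband, h, mode='same')     # reject the image
out = filtered[::decim]                               # decimate by 20
print(out.shape, np.abs(out[100]))                    # ~0.5: the wanted tone
```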
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. This algorithm is able to track the input signal's variation range and automatically adjust the subrange interval and update the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits an effective number of bits (ENOB) of 10.72 and a spur-free dynamic range (SFDR) of 79.63 dB at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves an FoM of 48 fJ/conversion-step in the best case. Also, the prototype is tested with an ECG signal input and successfully extracts the heartbeat signal.
Scores (score_0–score_13): 1.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.066667, 0.033333, 0, 0, 0, 0, 0, 0, 0
A Multiwavelet Neural Network-Based Response Surface Method for Structural Reliability Analysis. A new multiwavelet neural network-based response surface method is proposed for efficient structural reliability assessment. Although the multiwavelet network can be used for approximating nonlinear functions, its application has been limited to small-dimension problems due to computational cost. The new method expands the application of the multiwavelet network to moderate dimensions by introducing a series of intermediate nodes, whose number is determined by multiwavelet theory. Thus, a multidimensional function learning problem is transformed into a one-dimensional function learning problem. Four examples, including one stochastic finite element-based reliability problem, illustrate the effectiveness of the proposed method and indicate that, for problems with up to 10 random variables, the new method is more efficient than the classical multilayer perceptron-based response surface method.
Particle Swarm Optimization with Sequential Niche Technique for Dynamic Finite Element Model Updating Due to uncertainties associated with material properties, structural geometry, boundary conditions, and connectivity of structural parts, as well as inherent simplifying assumptions in the development of finite element (FE) models, actual behavior of structures often differs from model predictions. FE model updating comprises a multitude of techniques that systematically calibrate FE models in order to match experimental results. Updating of structural models can be posed as an optimization problem where model parameters that minimize the errors between the responses of the model and the actual structure are sought. However, due to the limited number of experimental responses and measurement errors, the optimization problem may have multiple admissible solutions in the search domain. Global optimization algorithms (GOAs) are useful and efficient tools in such situations as they try to find the globally optimal solution out of many possible local minima, but are not totally immune to missing the right minimum in complex problems such as those encountered in updating. A methodology based on particle swarm optimization (PSO), a GOA, with the sequential niche technique (SNT) for FE model updating is proposed and explored in this article. The combination of PSO and SNT enables a systematic search for multiple minima and considerably increases the confidence in finding the global minimum. The method is applied to FE model updating of a pedestrian cable-stayed bridge using modal data from full-scale dynamic testing.
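A bare-bones global-best PSO of the kind the updating methodology builds on; the sequential niche layer, which penalizes the region of an already-found minimum and restarts the search for further minima, is omitted here. The inertia and acceleration constants and the sphere objective are conventional toy choices, not values from the article.

```python
# Global-best PSO minimizing a black-box objective (toy settings).
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                 # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()  # update global best
    return g, f(g)

sphere = lambda z: float(np.sum(z ** 2))
print(pso(sphere, dim=4))    # converges near the origin
```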
Multiobjective evolutionary algorithms: A survey of the state of the art A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
Optimal Partitioning for the Decentralized Thermal Control of Buildings. This paper studies the problem of thermal control of buildings from the perspective of partitioning them into clusters for decentralized control. A measure of deviation in performance between centralized and decentralized control in the model predictive control framework, referred to as the optimality loss factor, is derived. Another quantity called the fault propagation metric is introduced as an indicator of the robustness of any decentralized architecture to sensing or communication faults. A computationally tractable agglomerative clustering approach is then proposed to determine the decentralized control architectures, which provide a satisfactory trade-off between the underlying optimality and robustness objectives. The potential use of the proposed partitioning methodology is demonstrated using simulated examples.
Adaptive Fuzzy Decentralized Output Stabilization for Stochastic Nonlinear Large-Scale Systems With Unknown Control Directions In this paper, an adaptive decentralized fuzzy output feedback stabilization problem is investigated for a class of uncertain stochastic nonlinear large-scale systems. The addressed stochastic nonlinear systems contain unknown nonlinear functions and unknown control directions, and the system states are not measured. Fuzzy logic systems are used to identify the unknown nonlinear functions, and a fuzzy state filter observer is designed to estimate the unmeasured states. To solve the problem of the unknown control direction in decentralized control design, Nussbaum-type functions are introduced and a new property of the Nussbaum-type function is proved. Based on the backstepping recursive design technique and the established Nussbaum function property, a new robust stabilization control approach is developed. It is proved that the proposed control approach can guarantee that all the signals of the resulting closed-loop system are bounded in probability, and the observer errors and system output converge to a small neighborhood of the origin. A simulation example is provided to show the effectiveness of the proposed approach.
Agent-based modeling and simulation of a smart grid: A case study of communication effects on frequency control. A smart grid is the next generation power grid focused on providing increased reliability and efficiency in the wake of integration of volatile distributed energy resources. For the development of the smart grid, the modeling and simulation infrastructure is an important concern. This study presents an agent-based model for simulating different smart grid frequency control schemes, such as demand response. The model can be used for combined simulation of electrical, communication and control dynamics. The model structure is presented in detail, and the applicability of the model is evaluated with four distinct simulation case examples. The study confirms that an agent-based modeling and simulation approach is suitable for modeling frequency control in the smart grid. Additionally, the simulations indicate that demand response could be a viable alternative for providing primary control capabilities to the smart grid, even when faced with communication constraints.
Multi-Strategy Coevolving Aging Particle Optimization We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Local and global properties in networks of processors (Extended Abstract) This paper attempts to get at some of the fundamental properties of distributed computing by means of the following question: “How much does each processor in a network of processors need to know about its own identity, the identities of other processors, and the underlying connection network in order for the network to be able to carry out useful functions?” The approach we take is to require that the processors be designed without any knowledge (or only very broad knowledge) of the networks they are to be used in, and furthermore, that all processors with the same number of communication ports be identical. Given a particular network function, e.g., setting up a spanning tree, we ask whether processors may be designed so that when they are embedded in any connected network and started in some initial configuration, they are guaranteed to accomplish the desired function.
Mdvm System Concept, Paging Latency And Round-2 Randomized Leader Election Algorithm In Sg The future trend in the computing paradigm is marked by mobile computing based on mobile-client/server architecture connected by wireless communication network. However, the mobile computing systems have limitations because of the resource-thin mobile clients operating on battery power. The MDVM system allows the mobile clients to utilize memory and CPU resources of Server-Groups (SG) to overcome the resource limitations of clients in order to support the high-end mobile applications such as m-commerce and virtual organization (VO). In this paper the concept of the MDVM system and the architecture of cellular network containing the SG are discussed. A round-2 randomized distributed algorithm is proposed to elect a unique leader and co-leader of the SG. The algorithm is free from any assumption about network topology, buffer space limitations and is based on dynamically elected coordinators eliminating single point of failure. The algorithm is implemented in a distributed system setup and the network-paging latency values of wired and wireless networks are measured experimentally. The experimental results demonstrate that in most cases the algorithm successfully terminates in the first round and the possibility of second round execution decreases significantly with the increase in the size of SG (|Na|). The overall message complexity of the algorithm is O(|Na|). The comparative study of network-paging latencies indicates that 3G/4G mobile communication systems would support the realization of the MDVM system.
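A toy simulation of the two-round structure only: each server-group member draws a random value, a unique maximum elects the leader in round 1, and a tie on the maximum forces a second round. The identifier space and tie model below are invented, so the measured probabilities illustrate the mechanism rather than reproduce the paper's results.

```python
# Round-1 vs round-2 outcome frequencies under a made-up identifier space.
import random

def elect(n, id_space=64):
    draws = [random.randrange(id_space) for _ in range(n)]
    if draws.count(max(draws)) == 1:
        return 1                  # unique maximum: leader found in round 1
    return 2                      # tie on the maximum: second round needed

random.seed(7)
trials = 10000
for n in (4, 8, 16, 32):
    second = sum(elect(n) == 2 for _ in range(trials)) / trials
    print(f"|Na|={n:2d}: P(round 2) ~ {second:.3f}")
```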
Sequential approximation of feasible parameter sets for identification with set membership uncertainty In this paper the problem of approximating the feasible parameter set for identification of a system in a set membership setting is considered. The system model is linear in the unknown parameters. A recursive procedure providing an approximation of the parameter set of interest through parallelotopes is presented, and an efficient algorithm is proposed. Its computational complexity is similar to that of the commonly used ellipsoidal approximation schemes. Numerical results are also reported on some simulation experiments conducted to assess the performance of the proposed algorithm.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores (score_0–score_13): 1.12, 0.12, 0.113333, 0.113333, 0.113333, 0.113333, 0.113333, 0.037778, 0, 0, 0, 0, 0, 0
EMI filter design in motor drives with Common Mode voltage active compensation In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of the motor CM-voltage active compensator and the input EMI filter allows the drive system to comply with EMC standards and to yield increased reliability at the same time. Two CM input EMI filters are built and compared. They are designed, respectively, according to the conventional design procedure and by considering the actual impedance mismatching between EMI source and receiver. In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system to standard limits.
EMI and reliability improvement in DC-fed induction motor drives by filtering techniques This paper presents design issues and realization of a common mode (CM) electromagnetic interference (EMI) filter for a DC-supplied motor drive equipped with an output active CM voltage compensator. The obtained system allows both reliability and EMI of the motor drive to be improved at the same time. In particular, as for reliability, the active CM voltage compensator gives a reduction of the stress on motor bearings; in addition, the input EMI filter, designed taking into account the impedance mismatching between EMI source and receiver in the actual circuit configuration, allows standard limits to be satisfied. Simulation analysis and experimental assessments are given.
Electrical analogous in viscoelasticity • Mechanical models of materials' viscoelastic behavior are approached by fractional calculus. • Electrical analogous circuits of fractional hereditary materials are proposed. • Validation is demonstrated by using modal analysis. • Electrical analogous circuits can help in better revealing the real behavior of fractional hereditary materials.
Computer Modeling of Nickel-Iron Alloy in Power Electronics Applications. Rotational magnetizations of an Ni-Fe alloy are simulated using two different computer modeling approaches, physical and phenomenological. The first one is a model defined using a single hysteron operator based on the Stoner and Wohlfarth theory and the second one is a model based on a suitable system of neural networks. The models are identified and validated using experimental data, and, finally...
A Comprehensive Design Approach to an EMI Filter for a 6-kW Three-Phase Boost Power Factor Correction Rectifier in Avionics Vehicular Systems A compact and efficient design for the electromagnetic interference (EMI) filter stage has become one of the most critical challenges in designing a high-density power converter, particularly for avionic applications. To maintain the regulatory standard requirements, EMI filter design needs to be precisely implemented. However, the attenuation characteristics of common-mode (CM) and differential-m...
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
Why systolic architectures?
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R·n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints y_k = ⟨f♯, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f♯‖_ℓ2 ≤ C_p · R · (K/log N)^(−r), r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed.
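The reconstruction step in the abstract is a linear program. Below is a small sketch of the standard reformulation of min ‖x‖₁ subject to Xx = y, splitting the unknown into value and slack variables with |x| ≤ t; the problem sizes and sparsity level are tiny toy values chosen so it runs quickly.

```python
# l1-minimization recovery from random Gaussian measurements via an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K = 60, 30
f = np.zeros(N)
f[rng.choice(N, size=4, replace=False)] = rng.normal(size=4)  # sparse signal

X = rng.normal(size=(K, N))        # Gaussian measurement ensemble
y = X @ f                          # K linear measurements

# variables z = [x, t]; minimize sum(t) s.t. -t <= x <= t and X x = y
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
print("recovery error:", np.linalg.norm(res.x[:N] - f))  # ~0 for sparse f
```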
Efficient Cache Attacks on AES, and Countermeasures We describe several software side-channel attacks based on inter-process leakage through the state of the CPU's memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux's dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.
Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
Noise Analysis and Simulation Method for a Single-Slope ADC With CDS in a CMOS Image Sensor Many mixed-signal circuits are nonlinear time-varying systems whose noise estimation cannot be obtained from the conventional frequency domain noise simulation (FNS). Although the transient noise simulation (TNS) supported by a commercial simulator takes into account nonlinear time-varying characteristics of the circuit, its simulation time is unacceptably long to obtain meaningful noise estimatio...
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
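The abstract does not spell out the offloading policy, so the following is only a hypothetical illustration of heterogeneity-aware dispatch, not Hetraph's actual mechanism: a BFS step is routed to a crossbar-style matrix-vector path when the frontier is dense and to an edge-at-a-time path when it is sparse (the threshold and both kernels are assumptions):

```python
import numpy as np

# Hypothetical dispatch rule: dense frontiers -> SpMV (crossbar-friendly),
# sparse frontiers -> per-edge traversal (digital-core-friendly).
def bfs_step(adj, frontier, visited, dense_threshold=0.1):
    n = adj.shape[0]
    if frontier.mean() >= dense_threshold:
        nxt = (adj.T @ frontier) > 0              # matrix-vector path
    else:
        nxt = np.zeros(n, dtype=bool)
        for u in np.flatnonzero(frontier):        # edge-at-a-time path
            nxt[adj[u] > 0] = True
    return nxt & ~visited

adj = (np.random.rand(8, 8) < 0.3).astype(int)    # toy directed graph
frontier = np.zeros(8, dtype=bool); frontier[0] = True
print(bfs_step(adj, frontier, visited=frontier.copy()))
```

Both branches compute the same next frontier; the point is merely that irregular workloads give neither path a universal advantage, which is the observation motivating the heterogeneous design.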
1.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
0
0
0
BlackParrot: An Agile Open-Source RISC-V Multicore for Accelerator SoCs This article introduces BlackParrot, which aims to be the default open-source, Linux-capable, cache-coherent, 64-bit RISC-V multicore used by the world. In executing this goal, our research aims to advance the world's knowledge about the “software engineering of hardware.” Although originally bootstrapped by the University of Washington and Boston University via DARPA funding, BlackParrot strives to be community driven and infrastructure agnostic; a multicore which is Pareto optimal in terms of power, performance, area, and complexity. In order to ensure BlackParrot is easy to use, extend, and, most importantly, trust, development is guided by three core principles: Be Tiny, Be Modular, and Be Friendly. Development efforts have prioritized the use of intentional interfaces and modularity and silicon validation as first-order design metrics, so that users can quickly get started and trust that their design will perform as expected when deployed. BlackParrot has been validated in a GlobalFoundries 12-nm FinFET tapeout. BlackParrot is ideal as a standalone Linux processor or as a malleable fabric for an agile accelerator SoC design flow.
RFUZZ: Coverage-Directed Fuzz Testing of RTL on FPGAs Dynamic verification is widely used to increase confidence in the correctness of RTL circuits during the pre-silicon design phase. Despite numerous attempts over the last decades to automate the stimuli generation based on coverage feedback, Coverage Directed Test Generation (CDG) has not found the widespread adoption that one would expect. Based on new ideas from the software testing community around coverage-guided mutational fuzz testing, we propose a new approach to the CDG problem which requires minimal setup and takes advantage of FPGA-accelerated simulation for rapid testing. We provide test input and coverage definitions that allow fuzz testing to be applied to RTL circuit verification. In addition we propose and implement a series of transformation passes that make it feasible to reset arbitrary RTL designs quickly, a requirement for deterministic test execution. Alongside this paper we provide rfuzz, a fully featured implementation of our testing methodology which we make available as open-source software to the research community. An empirical evaluation of RFUZZ shows promising results on achieving coverage for a wide range of different RTL designs ranging from communication IPs to an industry scale 64-bit CPU.
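The core loop of coverage-guided mutational fuzzing is small. In the sketch below, a stand-in Python function plays the role of the device under test and reports coverage points (the DUT, the one-byte mutation operator, and the coverage sets are all invented for illustration; real RFUZZ drives FPGA-simulated RTL and uses mux-toggle coverage):

```python
import random

# Toy mutate/execute/keep-if-new-coverage cycle.
def dut(data):
    cov = set()
    if len(data) >= 3 and data[0] == 0xAB:
        cov.add("magic")
        if data[1] == data[2]:
            cov.add("pair")
    return cov

random.seed(0)
seen, corpus = set(), [bytes(3)]
for _ in range(20_000):
    parent = bytearray(random.choice(corpus))
    parent[random.randrange(len(parent))] = random.randrange(256)  # mutate
    cov = dut(bytes(parent))
    if not cov <= seen:                  # new coverage -> keep the input
        seen |= cov
        corpus.append(bytes(parent))
print(sorted(seen))                      # typically ['magic', 'pair']
```

The key property the paper exploits is that inputs reaching new coverage are retained as seeds, so deep conditions are discovered incrementally rather than by blind random search.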
DifuzzRTL: Differential Fuzz Testing to Find CPU Bugs Security bugs in CPUs have critical security impacts on all computation-related hardware and software components, as the CPU is the core of computation. Although the architecture and security communities have explored a vast number of static or dynamic analysis techniques to automatically identify such bugs, the problem remains unsolved and challenging, largely due to the complex nat...
SimpleScalar: An Infrastructure for Computer System Modeling Designers can execute programs on software models to validate a proposed hardware design's performance and correctness, while programmers can use these models to develop and test software before the real hardware becomes available. Three critical requirements drive the implementation of a software model: performance, flexibility, and detail. Performance determines the amount of workload the model can exercise given the machine resources available for simulation. Flexibility indicates how well the model is structured to simplify modification, permitting design variants or even completely different designs to be modeled with ease. Detail defines the level of abstraction used to implement the model's components. The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling. It can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies. SimpleScalar simulators reproduce computing device operations by executing all program instructions using an interpreter. Modeling complex modern machines also requires effectively managing the large software projects needed to model such machines. Asim addresses these needs by providing a modular and reusable framework for creating many models. The framework's modularity helps break down the performance-modeling problem into individual pieces that can be modeled separately, while its reusability allows using a software component repeatedly in different contexts.
The ForSpec Temporal Logic: A New Temporal Property-Specification Language In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel's new formal specification language. The key features of FTL are as follows: it is a linear temporal logic, based on Pnueli's LTL; it is based on a rich set of logical and arithmetical operations on bit vectors to describe state properties; it enables the user to define temporal connectives over time windows; it enables the user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives; it enables the user to express properties about the past; and it includes constructs that enable the user to model multiple clock and reset signals, which is useful in the verification of hardware design.
Specification and formal verification of power gating in processors This paper presents a method for the specification as well as efficient formal verification of the power gating feature of processors. Given an instruction-set architecture model of a processor as the golden model, and a detailed processor model with the power gating feature, we propose an efficient method for equivalence checking of the two models using symbolic simulation and property checking. Our experimental results on a MIPS processor show that our method reduces the verification time compared to the correspondence checking method by at least 3.4x.
Composable Building Blocks to Open up Processor Design. We present a framework called Composable Modular Design (CMD) to facilitate the design of out-of-order (OOO) processors. In CMD, (1) The interface methods of modules provide instantaneous access and perform atomic updates to the state elements inside the module; (2) Every interface method is guarded, i.e., it cannot be applied unless it is ready; and (3) Modules are composed together by atomic rules which call interface methods of different modules. A rule either successfully updates the state of all the called modules or it does nothing. CMD designs are compiled into RTL which can be run on FPGAs or synthesized using standard ASIC design flows. The atomicity properties of interfaces in CMD ensures composability when selected modules are refined selectively. We show the efficacy of CMD by building a parameterized out-of-order RISC-V processor which boots Linux and runs on FPGAs at 25 MHz to 40 MHz. We also synthesized several variants of it in a 32 nm technology to run at 1 GHz to 1.1 GHz. Performance evaluation shows that our processor beats in-order processors in terms of IPC but will require more architectural work to compete with wider superscalar commercial ARM processors. Modules designed under the CMD framework (e.g., ROB, reservation stations, load store unit) can be used and refined by other implementations. We believe that this realistic framework can revolutionize architectural research and practice as the library of reusable components grows.
The Cost of Application-Class Processing: Energy and Performance Analysis of a Linux-ready 1.7GHz 64bit RISC-V Core in 22nm FDSOI Technology. The open-source RISC-V ISA is gaining traction, both in industry and academia. The ISA is designed to scale from micro-controllers to server-class processors. Furthermore, openness promotes the availability of various open-source and commercial implementations. Our main contribution in this work is a thorough power, performance, and efficiency analysis of the RISC-V ISA targeting baseline class functionality, i.e. supporting the Linux OS and its application environment, based on our open-source single-issue in-order implementation of the 64 bit ISA variant (RV64GC) called Ariane. Our analysis is based on a detailed power and efficiency analysis of the RISC-V ISA extracted from silicon measurements and calibrated simulation of an Ariane instance (RV64IMC) taped-out in GlobalFoundries 22 FDX technology. Ariane runs at up to 1.7 GHz and achieves up to 40 Gop/s/W peak efficiency. We give insight into the interplay between functionality required for application-class execution (e.g. virtual memory, caches, multiple modes of privileged operation) and energy cost. Our analysis indicates that ISA heterogeneity and simpler cores with a few critical instruction extensions (e.g. packed SIMD) can significantly boost a RISC-V core's compute energy efficiency.
P-Grid: a self-organizing structured P2P system Abstract: this paper was supported in part by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322 and by SNSF grant 2100064994, "Peer-to-Peer Information Systems." messages. From the responses it (randomly) selects certain peers to which direct network links are established
Low-Power Programmable Gain CMOS Distributed LNA A design methodology for low power MOS distributed amplifiers (DAs) is presented. The bias point of the MOS devices is optimized so that the DA can be used as a low-noise amplifier (LNA) in broadband applications. A prototype 9-mW LNA with programmable gain was implemented in a 0.18-μm CMOS process. The LNA provides a flat gain, S21, of 8 ± 0.6 dB from DC to 6.2 GHz, with an...
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use LNA based on a noise-cancelled common gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, a NF of 3 dB, and S11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW
Observers for a class of Lipschitz systems with extension to H∞ performance analysis In this paper, observer design for a class of Lipschitz nonlinear dynamical systems is investigated. One of the main contributions lies in the use of the differential mean value theorem (DMVT) which allows transforming the nonlinear error dynamics into a linear parameter varying (LPV) system. This has the advantage of introducing a general Lipschitz-like condition on the Jacobian matrix for differentiable systems. To ensure asymptotic convergence, in both continuous and discrete time systems, such sufficient conditions expressed in terms of linear matrix inequalities (LMIs) are established. An extension to H∞ filtering design is obtained also for systems with nonlinear outputs. A comparison with respect to the observer method of Gauthier et al. [A simple observer for nonlinear systems. Applications to bioreactors, IEEE Trans. Automat. Control 37(6) (1992) 875–880] is presented to show that the proposed approach avoids high gain for a class of triangular globally Lipschitz systems. In the last section, academic examples are given to show the performances and some limits of the proposed approach. The last example is introduced with the goal to illustrate good performances on robustness to measurement errors by avoiding high gain.
A 25 dBm Outphasing Power Amplifier With Cross-Bridge Combiners In this paper, we present a 25 dBm Class-D outphasing power amplifier (PA) with cross-bridge combiners. The Class-D PA is designed in a standard 45 nm process while the combiner is implemented on board using lumped elements for flexibilities in testing. Comparing with conventional non-isolated combiners, the elements of the cross-bridge combiner are carefully chosen so that additional resonance network is formed to reduce out-of-phase current, thereby increasing backoff efficiency of the outphasing PA. The Class-D outphasing PA with the proposed combiner is manufactured and measured at both 900 MHz and 2.4 GHz. It achieves 55% peak power-added efficiency (PAE) at 900 MHz and 45% at 2.4 GHz for a single tone input. For a 10 MHz LTE signal with 6 dB PAR, the PAE is 32% at 900 MHz with −39 dBc adjacent channel power ratio (ACPR) and 22% at 2.4 GHz with −33 dBc ACPR. With digital predistortion (DPD), the linearity of the PA at 2.4 GHz is improved further to reach −53 dBc, −50 dBc, −42 dBc ACPR for 10 MHz, 20 MHz, and 2-carrier 20 MHz LTE signals.
Walsh-Hadamard-Based Orthogonal Sampling Technique for Parallel Neural Recording Systems Walsh-Hadamard based orthogonal sampling of signals is studied in this paper, and an application of this technique is presented. Using orthogonal sampling, a single analog-to-digital converter (ADC) only is sufficient to perform parallel (simultaneous) recording from the sensors. Furthermore, employing Walsh functions as modulation signals, the required bandwidth of the ADC in the proposed system is equal to the bandwidth of a time-multiplexed ADC in a system with identical number of recording channels. Therefore, the bandwidth of the ADC in the proposed system is effectively employed and shared among all the channels. The efficient usage of the ADC bandwidth leads to saving power at the ADC stage and reducing the datarate of the output signal compared to state-of-the-art recording systems based on frequency-division multiplexing. This paper presents the orthogonal sampling technique for neural recording in multi-channel recording systems which is implemented with four recording channels using a 0.18 μm technology which results in a power consumption of 1.26 μW/channel at a 0.8 V supply.
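As a quick numerical check of the orthogonal-sampling idea (an idealized, noiseless model with 4 channels, not the paper's circuit), modulating the channels by orthogonal Walsh rows, summing them into one composite stream, and correlating back recovers the inputs exactly:

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(4)              # rows are mutually orthogonal Walsh sequences
x = np.random.randn(4, 100)  # 4 channel sample streams

mixed = H.T @ x              # one composite sample per Walsh chip: ADC input
recovered = (H @ mixed) / 4  # correlate with each row and normalize
assert np.allclose(recovered, x)
```

The single composite stream is what the shared ADC digitizes, which is why one converter running at the aggregate rate can serve all channels simultaneously.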
1.1
0.1
0.1
0.1
0.1
0.1
0.1
0.05
0
0
0
0
0
0
A General Theory of Injection Locking and Pulling in Electrical Oscillators—Part I: Time-Synchronous Modeling and Injection Waveform Design A general model of electrical oscillators under the influence of a periodic injection is presented. Stemming solely from the autonomy and periodic time variance inherent in all oscillators, the model’s underlying approach makes no assumptions about the topology of the oscillator or the shape of the injection waveform. A single first-order differential equation is shown to be capable of predicting a number of important properties, including the lock range, the relative phase of an injection-locked oscillator, and mode stability. The framework also reveals how the injection waveform can be designed to optimize the lock range. A diverse collection of simulations and measurements, performed on various types of oscillators, serve to verify the proposed theory.
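In the classic special case, that single first-order differential equation reduces to Adler's equation, dφ/dt = Δω − K·sin φ, which has a stable fixed point (injection lock) exactly when |Δω| ≤ K. A small normalized-units sketch (forward-Euler integration; K and the frequency offsets are arbitrary illustration values, not from the paper):

```python
import numpy as np

# Adler's phase equation in normalized units: dphi/dt = dw - K*sin(phi).
def locks(dw, K=1.0, dt=1e-3, steps=200_000):
    phi = 0.0
    for _ in range(steps):
        phi += (dw - K * np.sin(phi)) * dt    # forward Euler
    return abs(dw - K * np.sin(phi)) < 1e-6   # dphi/dt ~ 0 means locked

print(locks(0.5))   # inside the lock range: True
print(locks(1.5))   # outside: the phase keeps slipping (pulling): False
```

The paper's contribution is a generalization of this picture that makes no assumptions about oscillator topology or injection waveform shape.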
A Low-Noise Self-Oscillating Mixer Using a Balanced VCO Load. A low-noise self-oscillating mixer (SOM) operating from 7.8 to 8.8 GHz is described in this paper. Three different components, the oscillator, the mixer core, and the LNA transconductor stage, are assembled in a stacked configuration with full dc current-reuse from the VCO to the mixer to the LNA. The LC-tank oscillator also functions as a double-balanced IF load to the low-noise mixer core. A theoretical expression is given for the conversion gain of the SOM taking into account the time-varying nature of the IF load impedance. Measurements show that the SOM has a minimum DSB noise figure of 4.39 dB and a conversion gain of 11.6 dB. Its input P1dB is -13.6 dBm and its output P1dB is -2.97 dBm, while its IIP3 and OIP3 are -8.3 dBm and +3.3 dBm respectively. The chip consumes 12 mW of dc power and it occupies an area of 0.47 mm² without pads.
A General Theory of Injection Locking and Pulling in Electrical Oscillators—Part II: Amplitude Modulation in LC Oscillators, Transient Behavior, and Frequency Division A number of specialized topics within the theory of injection locking and pulling are addressed. The material builds on our impulse sensitivity function (ISF)-based, time-synchronous model of electrical oscillators under the influence of a periodic injection. First, we show how the accuracy of this model for LC oscillators under large injection is greatly enhanced by accounting for the injection's effect on the oscillation amplitude. In doing so, we capture the asymmetry of the lock range as well as the distinct behaviors exhibited by different LC oscillator topologies. Existing LC oscillator injection locking and pulling theories in the literature are subsumed as special cases. Next, a transient analysis of the dynamics of injection pulling is carried out, both within and outside of the lock range. Finally, we show how our existing framework naturally accommodates locking onto superharmonic and subharmonic injections, leading to several design considerations for injection-locked frequency dividers (ILFDs) and the implementation of a low-power dual-modulus prescaler from an injection-locked ring oscillator. Our theoretical conclusions are supported by simulations and experimental data from a variety of LC, ring, and relaxation oscillators.
Implicit Common-Mode Resonance in LC Oscillators. The performance of a differential LC oscillator can be enhanced by resonating the common mode of the circuit at twice the oscillation frequency. When this technique is correctly employed, Q-degradation due to the triode operation of the differential pair is eliminated and flicker noise is nulled. Until recently, one or more tail inductors have been used to achieve this common-mode resonance. In th...
A 1.8-GHz LC VCO with 1.3-GHz tuning range and digital amplitude calibration A 1.8-GHz LC VCO designed in a 0.18-μm CMOS process achieves a very wide tuning range of 73% and measured phase noise of -123.5 dBc/Hz at a 600-kHz offset from a 1.8-GHz carrier while drawing 3.2 mA from a 1.5-V supply. The impacts of wideband operation on start-up constraints and phase noise are discussed. Tuning range is analyzed in terms of fundamental dimensionless design parameters yie...
Low-Power Quadrature Receivers for ZigBee (IEEE 802.15.4) Applications Two very compact and low power quadrature receivers for ZigBee applications are presented. Area and power savings are obtained through both current reuse and oscillator tank sharing between the I and Q paths. Since this choice can cause I and Q amplitude/phase mismatches, the conversion gain is analyzed and a technique to minimize these errors is implemented. Moreover, since using a single tank ma...
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
P-Grid: a self-organizing structured P2P system Abstract: this paper was supported in part by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322 and by SNSF grant 2100064994, "Peer-to-Peer Information Systems." messages. From the responses it (randomly) selects certain peers to which direct network links are established
Encapsulation of parallelism in the Volcano query processing system Volcano is a new dataflow query processing system we have developed for database systems research and education. The uniform interface between operators makes Volcano extensible by new operators. All operators are designed and coded as if they were meant for a single-process system only. When attempting to parallelize Volcano, we had to choose between two models of parallelization, called here the bracket and operator models. We describe the reasons for not choosing the bracket model, introduce the novel operator model, and provide details of Volcano's exchange operator that parallelizes all other operators. It allows intra-operator parallelism on partitioned datasets and both vertical and horizontal inter-operator parallelism. The exchange operator encapsulates all parallelism issues and therefore makes implementation of parallel database algorithms significantly easier and more robust. Included in this encapsulation is the translation between demand-driven dataflow within processes and data-driven dataflow between processes. Since the interface between Volcano operators is similar to the one used in “real,” commercial systems, the techniques described here can be used to parallelize other query processing engines.
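A toy rendition of the idea, with plain Python threads and a bounded queue standing in for Volcano's process structure: every operator exposes the same pull-based next() interface, and the exchange operator alone hides the boundary between demand-driven dataflow (the consumer pulling) and data-driven dataflow (the producer thread pushing):

```python
import queue
import threading

class Scan:
    """A leaf operator: demand-driven iteration over base rows."""
    def __init__(self, rows):
        self.it = iter(rows)
    def next(self):
        return next(self.it, None)           # None signals end-of-stream

class Exchange:
    """Runs its child in a producer thread; consumers still just call next()."""
    def __init__(self, child, capacity=4):
        self.q = queue.Queue(capacity)
        threading.Thread(target=self._produce, args=(child,),
                         daemon=True).start()
    def _produce(self, child):
        while (row := child.next()) is not None:
            self.q.put(row)                  # data-driven: push as produced
        self.q.put(None)                     # forward the end-of-stream marker
    def next(self):
        return self.q.get()                  # demand-driven: pull on request

op = Exchange(Scan(range(5)))
while (r := op.next()) is not None:
    print(r)
```

Because Exchange presents the same interface as any other operator, the operators above and below it need no knowledge of parallelism, which is exactly the encapsulation the paper argues for.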
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes--ILP, TLP, and DLP-demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG-stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS-stage is carefully optimized. We show that a CS-stage with deep submicron transistors can have high IIP2, because the v_GS·v_DS cross-term in a two-dimensional Taylor approximation of the I_DS(V_GS, V_DS) characteristic can cancel the traditionally dominant square-law term in the I_DS(V_GS) relation at practical gain values. Using standard 65 nm transistors at 1.2 V supply voltage, we realize a balun-LNA with 15 dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz to 3.5 GHz with gain and phase errors below 0.3 dB and ±2 degrees. The total power consumption is 21 mW, while the active area is only 0.01 mm².
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint on the decision-making function of the cognitive cycle. In this paper, we explain how sensors distributed throughout the different layers of our CR model can help make the best decision in order to best contribute to sustainable development.
A 12.8 GS/s Time-Interleaved ADC With 25 GHz Effective Resolution Bandwidth and 4.6 ENOB This paper presents a 12.8 GS/s 32-way hierarchically time-interleaved SAR ADC with 4.6 ENOB in 65 nm CMOS. The prototype utilizes hierarchical sampling and cascode sampler circuits to enable greater than 25 GHz 3 dB effective resolution bandwidth (ERBW). We further employ a pseudo-differential SAR ADC to save power and area. The core circuit occupies only 0.23 mm² and consumes a total of 162 mW from dual 1.2 V/1.1 V supplies. The design achieves a SNDR of 29.4 dB at low frequencies and 26.4 dB at 25 GHz, resulting in a figure-of-merit of 0.79 pJ/conversion-step. As will be further described in the paper, the circuit architecture used in this prototype enables expansion to 25.6 GS/s or 51.2 GS/s via additional interleaving without significantly impacting ERBW.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.1
0.05
0.033333
0
0
0
0
0
0
0
0
Finite-time synchronization of multi-weighted complex dynamical networks with and without coupling delay. Two kinds of multi-weighted complex dynamical network models, with and without coupling delay, are considered in this paper. First, a finite-time synchronization criterion ensuring that multi-weighted complex dynamical networks with fixed topology and constant coupling synchronize in finite time is established by means of a Lyapunov functional and state feedback controllers. On the basis of the Dini derivative and some inequality techniques, a sufficient condition guaranteeing finite-time synchronization of multi-weighted complex dynamical networks with switching topology and constant coupling is acquired. On the other hand, in view of the results above, we similarly investigate multi-weighted complex dynamical networks with time-delayed coupling. Finally, two numerical examples are provided to verify the validity of the proposed results. (C) 2017 Elsevier B.V. All rights reserved.
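For readers unfamiliar with how such settling-time estimates arise, criteria of this kind typically rest on the standard finite-time Lyapunov lemma, stated here from the general literature rather than from this paper's specific construction:

```latex
% If a Lyapunov function satisfies the differential inequality
%   \dot V(t) \le -\alpha V(t)^{\eta}, \qquad \alpha > 0,\ 0 < \eta < 1,
% then V reaches zero in finite time, with settling time bounded by
\[
  T^{*} \;\le\; \frac{V(0)^{\,1-\eta}}{\alpha\,(1-\eta)} .
\]
```

The exponent η < 1 is what distinguishes finite-time results from ordinary exponential stability, where V only decays asymptotically and never reaches zero.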
Analysis and adaptive control for robust synchronization and H∞ synchronization of complex dynamical networks with multiple time-delays. In this paper, the robust synchronization and robust H∞ synchronization of complex networks with multiple time-delays are investigated. First, robust synchronization of a complex network with multiple time-delays is analyzed by using inequality techniques and constructing an appropriate Lyapunov functional. Then, an adaptive controller is designed to ensure robust synchronization of such a network. Moreover, considering that most networks are subject to external disturbances, we also analyze the robust H∞ synchronization of complex networks with multiple time-delays, and an adaptive controller is developed to guarantee robust H∞ synchronization of such a network. Finally, in order to verify the validity of the acquired results, two numerical examples are provided.
Generalized lag synchronization of multiple weighted complex networks with and without time delay. The generalized lag synchronization of multiple weighted complex dynamical networks with fixed and adaptive couplings is investigated in this paper. By designing appropriate controllers, several synchronization criteria are presented for multiple weighted complex dynamical networks with and without time delay, based on a selected Lyapunov functional and inequality techniques. Moreover, an adaptive scheme to update the coupling weights is developed to ensure the generalized lag synchronization of multiple weighted complex dynamical networks with and without time delay. Finally, two numerical examples are provided to validate the effectiveness of the proposed generalized lag synchronization criteria.
Analysis and pinning control for passivity of multi-weighted complex dynamical networks with fixed and switching topologies. In this paper, we respectively discuss passivity and pinning passivity of multi-weighted complex dynamical networks. By employing Lyapunov functional approach, several passivity criteria for the complex dynamical network with fixed topology and multi-weights are established. In addition, under the designed pinning adaptive state feedback controller, some sufficient conditions are obtained to ensure the passivity of the multi-weighted complex dynamical network with fixed topology. Furthermore, similar methods are used to derive several criteria for passivity and pinning passivity of complex dynamical networks with switching topology and multi-weights. Finally, two numerical examples with simulation results are given to show the correctness of the obtained passivity criteria.
Finite-Time Cluster Synchronization of Lur'e Networks: A Nonsmooth Approach. This paper is devoted to the finite-time cluster synchronization issue of nonlinearly coupled complex networks which consist of discontinuous Lur'e systems. On the basis of the definition of the Filippov regularization process and the measurable selection theorem, the discontinuously nonlinear function is mapped into a function-valued set, then a measurable function is accordingly selected from the Fi...
Social manufacturing: A survey of the state-of-the-art and future challenges Under the growing trend of personalization and socialization, social manufacturing is an emerging technical and business practice in the mass individualization paradigm that allows prosumers to build personalized products and individualized services with their partners by integrating inter-organizational manufacturing service processes. This paper provides a comprehensive literature review and a further discussion of social manufacturing via a constructive methodology. After clarifying the definition of social manufacturing, we analyze current research progress, including business models, implementation architectures and frameworks, case studies, and the key enabling techniques (e.g., big data mining and cyber-physical-social systems) for realizing the idea of social manufacturing. The potential impact and future challenges are pointed out as well. It is expected that this review can help readers gain more understanding of the idea of social manufacturing.
Finite-Time Synchronization of Impulsive Dynamical Networks With Strong Nonlinearity Finite-time synchronization (FTS) of dynamical networks has received much attention in recent years, as it has fast convergence rate and good robustness. Most existing results rely heavily on some global condition such as the Lipschitz condition, which has limitations in describing the strong nonlinearity of most real systems. Dealing with strong nonlinearity in the field of FTS is still a challenging problem. In this article, the FTS problem of impulsive dynamical networks with general nonlinearity (especially strong nonlinearity) is considered. In virtue of the concept of nonlinearity strength that quantizes the network nonlinearity, local FTS criteria are established, where the range of the admissible initial values and the settling time are solved. For the networks with weak nonlinearity, global FTS criteria that unify synchronizing, inactive, and desynchronizing impulses are derived. Differing from most existing studies on FTS, the node system here does not have to satisfy the global Lipschitz condition, therefore covering more situations that are practical. Finally, numerical examples are provided to demonstrate our theoretical results.
Analysis and Pinning Control for Output Synchronization and H∞ Output Synchronization of Multiweighted Complex Networks The output synchronization and H∞ output synchronization problems for multiweighted complex network are discussed in this paper. First, we analyze the output synchronization of multiweighted complex network by exploiting Lyapunov functional and Barbalat's lemma. In addition, some nodes- and edges-based pinning control strategies are developed to ensure the output synchronization of multiweighted complex network. Similarly, the H∞ output synchronization problem of multiweighted complex network is also discussed. Finally, two numerical examples are presented to verify the correctness of the obtained results.
Input-to-state stability for discrete-time nonlinear systems The input-to-state stability property and ISS small-gain theorems are introduced as the cornerstone of new stability criteria for discrete-time nonlinear systems.
Encapsulation of parallelism in the Volcano query processing system Volcano is a new dataflow query processing system we have developed for database systems research and education. The uniform interface between operators makes Volcano extensible by new operators. All operators are designed and coded as if they were meant for a single-process system only. When attempting to parallelize Volcano, we had to choose between two models of parallelization, called here the bracket and operator models. We describe the reasons for not choosing the bracket model, introduce the novel operator model, and provide details of Volcano's exchange operator that parallelizes all other operators. It allows intra-operator parallelism on partitioned datasets and both vertical and horizontal inter-operator parallelism. The exchange operator encapsulates all parallelism issues and therefore makes implementation of parallel database algorithms significantly easier and more robust. Included in this encapsulation is the translation between demand-driven dataflow within processes and data-driven dataflow between processes. Since the interface between Volcano operators is similar to the one used in “real,” commercial systems, the techniques described here can be used to parallelize other query processing engines.
Analysis and modeling of bang-bang clock and data recovery circuits A large-signal piecewise-linear model is proposed for bang-bang phase detectors that predicts characteristics of clock and data recovery circuits such as jitter transfer, jitter tolerance, and jitter generation. The results are validated by 1-Gb/s and 10-Gb/s CMOS prototypes using an Alexander phase detector and an LC oscillator.
Algebraic formulation and strategy optimization for a class of evolutionary networked games via semi-tensor product method Using the semi-tensor product method, this paper investigates the algebraic formulation and strategy optimization for a class of evolutionary networked games with the "myopic best response adjustment" rule, and presents a number of new results. First, the dynamics of the evolutionary networked game is converted to an algebraic form via the semi-tensor product, and an algorithm is established to construct the algebraic formulation for the game. Second, based on the algebraic form, the dynamical behavior of evolutionary networked games is discussed, and some interesting results are presented. Finally, the strategy optimization problem is considered by adding a pseudo-player to the game, and a free-type control sequence is designed to maximize the average payoff of the pseudo-player. The study of an illustrative example shows that the new results obtained in this paper work very well.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All of the three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively. Copyright © 2014 John Wiley & Sons, Ltd.
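The dynamic-programming recurrence shared by all three alignment types is the textbook one. Here is a plain CPU reference for the global (Needleman-Wunsch) case, with assumed scoring parameters (match +2, mismatch -1, gap -1; GSWABE's actual scores are not given in the abstract), against which a tiled GPU kernel could be checked:

```python
import numpy as np

def nw_score(a, b, match=2, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch recurrence."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    H[:, 0] = gap * np.arange(len(a) + 1)     # leading gaps in b
    H[0, :] = gap * np.arange(len(b) + 1)     # leading gaps in a
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s,    # substitute
                          H[i - 1, j] + gap,      # gap in b
                          H[i, j - 1] + gap)      # gap in a
    return H[-1, -1]

print(nw_score("GATTACA", "GCATGCU"))
```

Semi-global and local alignment reuse the same cell update with different boundary conditions and maximization rules, which is why, as the abstract notes, all three map onto one tiled GPU computation pattern.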
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm, together with a real-time QRS-detection algorithm, is proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval. The algorithm tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a FoM of 48 fJ/conversion-step in the best case. The prototype is also tested with ECG signal input and extracts the heart-beat signal successfully.
1.05552
0.052
0.052
0.0474
0.04
0.04
0.04
0.031333
0.000933
0
0
0
0
0
Electing a Leader in Dynamic Networks using Mobile Agents and Local Computations. In dynamic distributed systems, the topology of the network changes over time, which makes the design of distributed algorithms difficult and their proofs much harder. These unavoidable topology changes make electing a leader and maintaining the elected leader a complex task; the maintenance problem does not arise in a static context. To encode distributed algorithms, we adopt the local computation model, in which distributed algorithms are formally presented as transition systems. Based on both the mobile agent paradigm and the local computation model, we present in this paper a distributed algorithm that elects a leader in a tree. We consider a set of topological events that may affect the structure of the tree, focusing on the appearance and disappearance of places as well as of communication channels. Our goal is to always maintain either a tree with a single leader or a forest of trees where each tree has its own leader.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
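As a concrete illustration of why dominance frontiers matter for SSA construction, the sketch below uses networkx, whose dominance routines implement the standard algorithms (the tiny diamond-shaped CFG is an invented example), to compute where phi-nodes would be placed:

```python
import networkx as nx

# A diamond CFG: entry -> a | b -> join -> exit. A variable assigned in
# both 'a' and 'b' needs a phi-node at 'join', i.e. at their dominance
# frontier: the first blocks their dominance does not reach.
G = nx.DiGraph([("entry", "a"), ("entry", "b"),
                ("a", "join"), ("b", "join"), ("join", "exit")])
print(nx.dominance_frontiers(G, "entry"))
# expected: 'a' and 'b' each have frontier {'join'}; the rest are empty
```

Iterating phi placement over dominance frontiers until a fixed point is exactly the step that makes SSA construction efficient in practice.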
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
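Chord's key-to-node mapping is plain consistent hashing on an identifier circle. The sketch below shows only that mapping (node names and the 16-bit toy identifier space are illustrative; there are no finger tables here, so lookup is a local bisect rather than Chord's O(log N)-hop routing):

```python
import hashlib
from bisect import bisect_right

M = 2 ** 16                                   # toy identifier space
def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % M

ring = sorted(h(f"node{i}") for i in range(8))  # node IDs on the circle
def successor(key):
    i = bisect_right(ring, h(key))
    return ring[i % len(ring)]                  # wrap around the circle

print(successor("some-data-item"))              # ID of the responsible node
```

Because keys map to the first node clockwise from their hash, a node joining or leaving only moves the keys between it and its predecessor, which is what gives Chord its efficient churn behavior.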
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
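As a concrete instance of the method applied to one of the listed problems, here is a compact ADMM iteration for the lasso: the x-update solves a cached ridge system, the z-update is soft-thresholding, and u accumulates the running constraint residual (ρ, λ, and the iteration count are illustrative choices, not prescriptions from the survey):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 via the x/z/u splitting."""
    n = A.shape[1]
    z, u = np.zeros(n), np.zeros(n)
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))          # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z
        u += x - z                                                  # dual
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40); x_true[:4] = 3.0
x_hat = admm_lasso(A, A @ x_true, lam=1.0)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # recovers the support {0,1,2,3}
```

The same three-step pattern, with the z-update swapped for a different proximal operator, covers most of the other applications the abstract lists, which is what makes the method attractive for distributed implementations.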
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Design of A Transformer-Based Reconfigurable Digital Polar Doherty Power Amplifier Fully Integrated in Bulk CMOS This paper presents a digital polar Doherty power amplifier (PA) fully integrated in a 65 nm bulk CMOS process. It achieves +27.3 dBm peak output power and 32.5% peak PA drain efficiency at 3.82 GHz and 3.60 GHz, respectively. The proposed digital Doherty PA architecture optimizes the cooperation of the main and auxiliary amplifiers and achieves superior back-off efficiency enhancement (a maximum 47.9% improvement versus the corresponding Class-B operation). This digital-intensive architecture also allows in-field PA reconfigurability, which both provides robust PA operation against antenna mismatches and allows flexible trade-off optimization on PA efficiency and linearity. Transformer-based passives are employed as the Doherty input and output networks. The input 90° signal splitter is realized by a 6-port folded differential transformer structure. The active Doherty load modulation and power combining at the PA output are achieved by two transformers in a parallel configuration. These transformer-based passives ensure an ultra-compact PA design (2.1 mm²) and broad bandwidth (24.9% for the 1-dB output-power bandwidth). Measurement with 1 MSym/s QPSK signal shows 3.5% rms EVM with +23.5 dBm average output power and 26.8% PA drain efficiency. Measurement with 16-QAM signal exhibits the PA's flexibility on optimizing efficiency and linearity.
An Incremental-Charge-Based Digital Transmitter With Built-in Filtering A fully integrated transmitter architecture operating in the charge-domain with incremental signaling is presented. The architecture provides improved out-of-band noise performance, thanks to an intrinsic low-pass noise filtering capability, reduced quantization noise scaled by capacitance ratios, and sinc² alias attenuation due to a quasi-linear reconstruction interpolation. With a respective un...
Quantization Noise Suppression in Digitally Segmented Amplifiers In this paper, we consider the problem of out-of-band quantization noise suppression in the general family of direct digital-to-RF (DDRF) conversion circuits, where the RF carrier is amplitude modulated by a quantized representation of the baseband signal. Hence, it is desired to minimize the out-of-band quantization noise in order to meet stringent requirements such as receive-band noise levels in frequency-division duplex transceivers. In this paper, we address the problem of out-of-band quantization noise by introducing a novel signal-processing solution, which we refer to as "segmented filtering" (SF). We assess the capability of the proposed SF solution by means of performance analysis and results that have been obtained via circuit-level computer simulations as well as laboratory measurements. Our proposed approach has demonstrated the ability to preserve the required signal quality and power amplifier (PA) efficiency while providing more than 35-dB attenuation of the quantization noise, thus eliminating the need for substantial post-PA passband RF filtering.
Split-Array, C-2C Switched-Capacitor Power Amplifiers. This paper presents a 13-b C-2C split-array (SA) multiphase switched-capacitor power amplifier (SAMP-SCPA) implemented in 65-nm CMOS. The SAMP-SCPA was designed for 16-b resolution to offer extra states for linearization/calibration using digital pre-distortion (DPD). Resolution limits for SA SCPAs are presented. The SAMP-SCPA allows for the improvement of the SCPA resolution while minimizing the ...
Efficient Digital Quadrature Transmitter Based on IQ Cell Sharing. In this paper, we proposed and designed a digitally configured versatile RF quadrature transmitter. The transmitter efficiency was enhanced by IQ cell sharing and the deactivation of cells of opposite phases. In simulation, these techniques were able to increase the average efficiency of the transmitter from 46.3% to 70.7% for a 6.9-dB PAPR LTE signal. Moreover, the number of power amplifying cell...
Design Considerations for a Direct Digitally Modulated WLAN Transmitter With Integrated Phase Path and Dynamic Impedance Modulation. A 65-nm digitally modulated polar TX for WLAN 802.11g is fully integrated along with baseband digital filtering. The TX employs dynamic impedance modulation to improve efficiency at back-off powers. High-bandwidth phase modulation is achieved efficiently with an open-loop architecture. Operating from 1.2-V/1-V supplies, the TX delivers 16.8 dBm average power at -28-dB EVM with 24.5% drain efficien...
A Fully-Integrated High-Power Linear CMOS Power Amplifier With a Parallel-Series Combining Transformer. In this paper, a linear CMOS power amplifier (PA) with high output power (34-dBm saturated output power) for high data-rate mobile applications is introduced. The PA incorporates a parallel combination of four differential PA cores to generate high output power with good efficiency and linearity. To implement an efficient on-chip power combiner in a small form-factor, we propose a parallel-series ...
A Class-E PA With Pulse-Width and Pulse-Position Modulation in 65 nm CMOS A class-E power amplifier (PA) utilizing differential switches and a tuned passive output network improves power-added efficiency (PAE) and insensitivity to amplitude variations at its input. A modulator is introduced that takes outphased waveforms as its inputs and generates a pulse-width and pulse-position modulated (PWPM) signal as its output. The PWPM modulator is used in conjunction with a class-E PA to efficiently amplify constant envelope (e.g., GMSK) and non-constant envelope (e.g., QPSK, QAM, OFDM) signals with moderate peak-to-average ratios (PAR). The measured maximum output power of the PA is 28.6 dBm with a PAE of 28.5%, and the measured error vector magnitude (EVM) is 1.2% and 4.6% for GMSK and π/4-DQPSK (PAR ≈ 4 dB) modulated signals, respectively.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly, to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Development of Integrated Broad-Band CMOS Low-Noise Amplifiers This paper presents a systematic design methodology for broad-band CMOS low-noise amplifiers (LNAs). The feedback technique is proposed to attain a better design tradeoff between gain and noise. The network synthesis is adopted for the implementation of broad-band matching networks. The sloped interstage matching is used for gain compensation. A fully integrated ultra-wide-band 0.18-μm CMOS LNA i...
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
Bidirectional current-mode capacitor multiplier in DC-DC converter compensation Bidirectional current-mode capacitor multipliers for on-chip DC-DC converter compensation are presented in this paper. The increasing demand for portable devices is a driving force toward higher integration; the goal is to reduce physical area while maintaining the same or better performance. Based on TSMC 0.35-μm technology, we demonstrate that a small capacitor is multiplied by a factor of about 200. This allows the compensation circuit of the DC-DC converter's control system to be easily integrated on chip while occupying less silicon area. The experimental results show that the DC-DC converter with the proposed architecture is stable for wide loading conditions from 10 mA to 400 mA while the input voltage is 3.3 V and the output voltage is 2.0 V. The quiescent current consumed is 9 μA by the single-ended capacitor multiplier and 19 μA by the two-ended capacitor multiplier.
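The principle behind current-mode capacitor multiplication, stated here for context as a standard identity rather than the paper's specific circuit: a current mirror senses the capacitor current and injects a scaled copy into the same node, so the node sees

```latex
% i_C flows through the physical capacitor C; the mirror adds N*i_C in
% parallel, so the node behaves as a capacitance (1+N)C.
i_{\mathrm{node}} \;=\; i_C + N\,i_C \;=\; (1+N)\,C\,\frac{dv}{dt}
\quad\Longrightarrow\quad
C_{\mathrm{eff}} = (1+N)\,C
```

With a mirror gain of N ≈ 199 this yields the roughly 200× multiplication reported above.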
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows attenuating large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5-200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1.020902
0.021833
0.021177
0.02
0.02
0.01415
0.01
0.002311
0
0
0
0
0
0
Automatic RTL Test Generation from SystemC TLM Specifications SystemC transaction-level modeling (TLM) is widely used to enable early exploration for both hardware and software designs. It can reduce the overall design and validation effort of complex system-on-chip (SOC) architectures. However, due to the lack of automated techniques coupled with limited reuse of validation efforts between abstraction levels, SOC validation is becoming a major bottleneck. This article presents a novel top-down methodology for automatically generating register transfer-level (RTL) tests from SystemC TLM specifications. It makes two important contributions: (i) it proposes a method that can automatically generate TLM tests using various coverage metrics, and (ii) it develops a test refinement specification for automatically converting TLM tests to RTL tests in order to reduce overall validation effort. We have developed a tool which incorporates these activities to enable automated RTL test generation from SystemC TLM specifications. Case studies using a router example and a 64-bit Alpha AXP pipelined processor demonstrate that our approach can achieve intended functional coverage of the RTL designs, as well as capture various functional errors and inconsistencies between specifications and implementations.
Systematic software-based self-test for pipelined processors Software-based self-test (SBST) has recently emerged as an effective methodology for the manufacturing test of processors and other components in systems-on-chip (SoCs). By moving test related functions from external resources to the SoC's interior, in the form of test programs that the on-chip processor executes, SBST significantly reduces the need for high-cost, big-iron testers, and enables high-quality at-speed testing and performance binning. Thus far, SBST approaches have focused almost exclusively on the functional (programmer visible) components of the processor. In this paper, we analyze the challenges involved in testing an important component of modern processors, namely, the pipelining logic, and propose a systematic SBST methodology to address them. We first demonstrate that SBST programs that only target the functional components of the processor are not sufficient to test the pipeline logic, resulting in a significant loss of overall processor fault coverage. We further identify the testability hotspots in the pipeline logic using two fully pipelined reduced instruction set computer (RISC) processor benchmarks. Finally, we develop a systematic SBST methodology that enhances existing SBST programs so that they comprehensively test the pipeline logic. The proposed methodology is complementary to previous SBST techniques that target functional components (their results can form the input to our methodology, and thus we can reuse the test development effort behind preexisting SBST programs). We automate our methodology and incorporate it in an integrated software environment (developed using Java, XML, and archC) for the automatic generation of SBST routines for microprocessors. We apply the methodology to the two complex benchmark RISC processors with respect to two fault models: stuck-at fault model and transition delay fault model. Simulation results show that our methodology provides significant improvements for the two fault models, both for the entire processor (12% fault coverage improvement on average) and for the pipeline logic itself (19% fault coverage improvement on average), compared to a conventional SBST approach.
The ForSpec Temporal Logic: A New Temporal Property-Specification Language In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel's new formal specification language. The key features of FTL are as follows: it is a linear temporal logic, based on Pnueli's LTL; it is based on a rich set of logical and arithmetical operations on bit vectors to describe state properties; it enables the user to define temporal connectives over time windows; it enables the user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives; it enables the user to express properties about the past; and it includes constructs that enable the user to model multiple clock and reset signals, which is useful in the verification of hardware design.
Accelerating microprocessor silicon validation by exposing ISA diversity Microprocessor design validation is a time consuming and costly task that tends to be a bottleneck in the release of new architectures. The validation step that detects the vast majority of design bugs is the one that stresses the silicon prototypes by applying huge numbers of random tests. Despite its bug detection capability, this step is constrained by extreme computing needs for random tests simulation to extract the bug-free memory image for comparison with the actual silicon image. We propose a self-checking method that accelerates silicon validation and significantly increases the number of applied random tests to improve bug detection efficiency and reduce time-to-market. Analysis of four major ISAs (ARM, MIPS, PowerPC, and x86) reveals their inherent diversity: more than three quarters of the instructions can be replaced with equivalent instructions. We exploit this property in post-silicon validation and propose a methodology for the generation of random tests that detect bugs by comparing results of equivalent instructions. We support our bug detection method in hardware with a light-weight mechanism which, in case of a mismatch, replays the random test replacing the offending instruction with its equivalent. Our bug detection method and corresponding hardware significantly accelerate the post-silicon validation process. Evaluation of the method on an x86 microprocessor model demonstrates its efficiency over simulation-based and self-checking alternatives, in terms of bug detection capabilities and validation time speedup.
Specification and formal verification of power gating in processors This paper presents a method for specification as well as efficient formal verification of the power gating feature of processors. Given an instruction-set architecture model of a processor, as the golden model, and a detailed processor model with power gating feature, we propose an efficient method for equivalence checking of the two models using symbolic simulation and property checking. Our experimental results on a MIPS processor show that our method reduces the verification time compared to the correspondence checking method by at least 3.4x.
Run-time hardware trojan detection using performance counters There has been a growing trend in recent years to outsource various aspects of the semiconductor design and manufacturing flow to different parties spread across the globe. Such outsourcing increases the risk of adversaries adding malicious logic, referred to as hardware Trojans, to the original design. In this paper, we introduce a run-time hardware Trojan detection method for microprocessor cores. This approach uses Half-space trees to detect the activation of Trojans that introduce abnormal patterns in the data streams obtained from performance counters. It does not require any additional hardware or the monitoring of a large number of internal signals. We evaluate our method by detecting the activation of Trojans that cause denial-of-service, the degradation of system performance, and change in functionality of a microprocessor core. Results obtained using the OpenSPARC T1 core and an FPGA prototyping framework show that Trojan activation is detected with true positive ratio of above 0.9 and a false positive ratio of below 0.1 for most of the implemented Trojans.
DifuzzRTL: Differential Fuzz Testing to Find CPU Bugs Security bugs in CPUs have critical security impacts to all the computation related hardware and software components as it is the core of the computation. In spite of the fact that architecture and security communities have explored a vast number of static or dynamic analysis techniques to automatically identify such bugs, the problem remains unsolved and challenging largely due to the complex nat...
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
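The heart of the method is a closed-form score for a network structure B_S given a database D of cases. Under the usual assumptions in this line of work (discrete multinomial variables, uniform Dirichlet priors, complete data, independent cases), it takes the well-known Cooper-Herskovits form:

```latex
% n variables; variable i has r_i values and q_i distinct parent
% configurations; N_ijk = number of cases where variable i takes value k
% under parent configuration j; N_ij = sum_k N_ijk.
P(B_S, D) \;=\; P(B_S)\,\prod_{i=1}^{n}\;\prod_{j=1}^{q_i}
\frac{(r_i - 1)!}{(N_{ij} + r_i - 1)!}\;\prod_{k=1}^{r_i} N_{ijk}!
```

Structure search then amounts to ranking candidate parent sets for each variable by this score.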
Cost Efficient Resource Management in Fog Computing Supported Medical Cyber-Physical System. With the recent development in information and communication technology, more and more smart devices penetrate into people's daily life to improve its quality. As a growing healthcare trend, medical cyber-physical systems (MCPSs) enable seamless and intelligent interaction between the computational elements and the medical devices. To support MCPSs, cloud resources are usually explored to pro...
Information spreading in stationary Markovian evolving graphs Markovian evolving graphs [2] are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios.
Design of a Pressure Control System With Dead Band and Time Delay This paper investigates the control of pressure in a hydraulic circuit containing a dead band and a time-varying delay. The dead band is considered as a linear term and a perturbation. A sliding mode controller is designed. Stability conditions are established by making use of Lyapunov-Krasovskii functionals; non-perfect time-delay estimation is studied, and a condition for the effect of dead-zone uncertainties on stability is derived. Also, the effect of different LMI formulations on conservativeness is studied. The control law is tested in practice.
Investigation of the Energy Regeneration of Active Suspension System in Hybrid Electric Vehicles This paper investigates the idea of the energy regeneration of active suspension (AS) system in hybrid electric vehicles (HEVs). For this purpose, extensive simulation and control methods are utilized to develop a simultaneous simulation in which both HEV powertrain and AS systems are simulated in a unified medium. In addition, a hybrid energy storage system (ESS) comprising electrochemical batteries and ultracapacitors (UCs) is proposed for this application. Simulation results reveal that the regeneration of the AS energy results in an improved fuel economy. Moreover, by using the hybrid ESS, AS load fluctuations are transferred from the batteries to the UCs, which, in turn, will improve the efficiency of the batteries and increase their life.
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy-efficiencies are limited due to their excessive supports for flexibility. Hardware accelerators may achieve better energy-efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique that performs well on one dataset may perform poorly on another dataset, which implies that such an accelerator may sometimes lead to poor learning accuracy. Even if regardless of the learning accuracy, such an accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefiting from our thorough analysis on computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm², and consumes only 596 mW. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
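Taking the abstract's peak figures at face value, the implied peak energy efficiency is straightforward arithmetic (illustrative only):

```latex
% Peak energy efficiency implied by the quoted numbers:
% 1056 GOP/s at 596 mW.
\frac{1056\ \text{GOP/s}}{0.596\ \text{W}} \;\approx\; 1772\ \text{GOP/s per watt}
```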
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.084444
0.066667
0.066667
0.066667
0.066667
0.066667
0.033333
0
0
0
0
0
0
0
CMOS Interface for Capacitive Sensors with Custom Fully-Differential Amplifiers In many applications it is crucial to design reliable and efficient analog readout circuits for micro-electromechanical (MEMS) capacitive sensors. In this paper, we describe the switched-capacitor, open-loop, capacitive-sensing readout circuit, which was designed and manufactured in 0.18 μm technology. Non-standard application of a fully differential amplifier structure is also presented. The post-layout simulation results are described to show the proper operation of the circuit. They show that with the proper symmetrical design of the differential signal path the output offset voltage can be kept at acceptable level.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
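As a concrete illustration of the dominance-frontier concept, here is a compact way to compute dominance frontiers from immediate dominators. This is the later Cooper-Harvey-Kennedy formulation, shown as a sketch rather than the paper's own algorithm; all names are illustrative:

```python
# Dominance frontiers from immediate dominators.
# preds: node -> list of CFG predecessors.
# idom:  node -> immediate dominator (the entry node dominates itself).

def dominance_frontiers(preds, idom):
    df = {n: set() for n in preds}
    for b in preds:
        if len(preds[b]) >= 2:              # only join points contribute
            for p in preds[b]:
                runner = p
                # walk up the dominator tree until idom(b); every node on
                # the way has b in its dominance frontier
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Example: diamond CFG  entry -> a, b;  a, b -> join
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# {'entry': set(), 'a': {'join'}, 'b': {'join'}, 'join': set()}
```

The dominance frontier of a node is exactly where SSA phi-functions for its definitions must be placed, which is why this structure makes SSA construction efficient.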
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
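Chord's single operation, mapping a key onto a node, can be sketched in a few lines. The hash function, the 16-bit identifier space, and the linear scan below are illustrative simplifications; real Chord routes via finger tables to reach the successor in O(log n) hops:

```python
# Minimal sketch of Chord's key -> node mapping on the identifier circle.
import hashlib

M = 16                                    # identifier bits: circle of size 2^M

def chord_id(name):
    # hash names (nodes or data keys) onto the identifier circle
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key_id):
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:                   # first node at or after the key
            return n
    return ring[0]                        # wrap around the circle

nodes = [chord_id(f"node-{i}") for i in range(8)]
key = chord_id("some-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")
```

Because both nodes and keys are hashed onto the same circle, joins and leaves only move the keys between a node and its immediate successor, which is what gives Chord its efficient adaptation.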
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
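The two composite metrics named above combine energy E, delay D, and die area A multiplicatively; written out (a straightforward reading of the acronyms, in the spirit of the classic energy-delay product):

```latex
% Cost metrics from total energy E, execution delay D, and die area A:
\mathrm{EDAP} = E \cdot D \cdot A,
\qquad
\mathrm{EDA^{2}P} = E \cdot D \cdot A^{2}
```

Squaring the area term weights die cost more heavily, which is why the preferred cluster size shifts from 8 to 4 cores once cost is taken into account.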
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
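The algorithm itself is compact. For the generic problem of minimizing f(x) + g(z) subject to Ax + Bz = c, with augmented Lagrangian L_ρ, each ADMM iteration performs, in its standard (unscaled) form:

```latex
% One ADMM iteration: alternate minimization over x and z, then a dual
% ascent step on y with penalty parameter rho > 0.
\begin{align*}
x^{k+1} &:= \arg\min_{x}\; L_\rho(x,\, z^k,\, y^k) \\
z^{k+1} &:= \arg\min_{z}\; L_\rho(x^{k+1},\, z,\, y^k) \\
y^{k+1} &:= y^k + \rho\,(A x^{k+1} + B z^{k+1} - c)
\end{align*}
```

Splitting the objective into f and g is what makes the method suit distributed settings: the x- and z-updates often decompose across machines, with only the dual update requiring coordination.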
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
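The structure is easy to sketch: every node draws a random membership vector, and at level i the nodes sharing an i-bit prefix of that vector form a linked list, so each node belongs to O(log n) lists and searches descend levels as in a skip list. A minimal Python sketch of the level construction only (membership vectors are the standard mechanism, though not spelled out in this abstract; routing and repair are omitted, and all names are illustrative):

```python
# Build the level structure of a skip graph from random membership vectors.
import random

def build_levels(keys, bits=4):
    # one random bit string (membership vector) per node
    mv = {k: tuple(random.randint(0, 1) for _ in range(bits)) for k in keys}
    levels = []
    for i in range(bits + 1):
        groups = {}
        for k in sorted(keys):
            groups.setdefault(mv[k][:i], []).append(k)
        levels.append(groups)   # level i: one sorted list per i-bit prefix
    return levels

for i, groups in enumerate(build_levels([3, 9, 14, 20, 27, 31])):
    print(f"level {i}:", list(groups.values()))
```

Because each node's lists are determined by its own vector, no single node is critical to the structure, which is the source of the resilience the abstract describes.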
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Information Spreading in Stationary Markovian Evolving Graphs Markovian evolving graphs are dynamic-graph models where the links among a fixed set of nodes change during time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios. We study the speed of information spreading in the stationary phase by analyzing the completion time of the flooding mechanism. We prove a general theorem that establishes an upper bound on flooding time in any stationary Markovian evolving graph in terms of its node-expansion properties. We apply our theorem in two natural and relevant cases of such dynamic graphs: geometric Markovian evolving graphs, where the Markovian behaviour is yielded by n mobile radio stations, with fixed transmission radius, that perform independent random walks over a square region of the plane; and edge-Markovian evolving graphs, where the probability of existence of any edge at time t depends on the existence (or not) of the same edge at time t-1. In both cases, the obtained upper bounds hold with high probability and they are nearly tight. In fact, they turn out to be tight for a large range of the values of the input parameters. As for geometric Markovian evolving graphs, our result represents the first analytical upper bound for flooding time on a class of concrete mobile networks.
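The edge-Markovian case is easy to experiment with: each potential edge is born with probability p if absent and dies with probability q if present, independently at every step. A small simulation sketch of flooding time (all parameters illustrative; edges start in the stationary distribution, where each is alive with probability p/(p+q)):

```python
# Flooding time on an edge-Markovian evolving graph (illustrative sketch).
import itertools
import random

def flooding_time(n=50, p=0.05, q=0.3, source=0):
    pairs = list(itertools.combinations(range(n), 2))
    # stationary start: each edge alive with probability p / (p + q)
    alive = {e: random.random() < p / (p + q) for e in pairs}
    informed, t = {source}, 0
    while len(informed) < n:
        t += 1
        for e in pairs:                        # one Markovian step per edge
            if alive[e]:
                alive[e] = random.random() >= q   # edge dies w.p. q
            else:
                alive[e] = random.random() < p    # edge is born w.p. p
        snapshot = set(informed)               # flood one hop over current edges
        for u, v in pairs:
            if alive[(u, v)] and ((u in snapshot) != (v in snapshot)):
                informed.update((u, v))
    return t

print(flooding_time())
```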
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^{O(√log n)} · n time units, and we show a lower bound of 2^{Ω(√log n)} · n time units on the running time of the algorithm.
Efficient routing in carrier-based mobile networks The past years have seen an intense research effort directed at study of delay/disruption tolerant networks and related concepts (intermittently connected networks, opportunistic mobility networks). As a fundamental primitive, routing in such networks has been one of the research foci. While multiple network models have been proposed and routing in them investigated, most of the published results are of heuristic nature with experimental validation; analytical results are scarce and apply mostly to networks whose structure follows deterministic schedule. In this paper, we propose a simple model of opportunistic mobility network based on oblivious carriers, and investigate the routing problem in such networks. We present an optimal online routing algorithm and compare it with a simple shortest-path inspired routing and optimal offline routing. In doing so, we identify the key parameters (the minimum non-zero probability of meeting among the carrier pairs, and the number of carriers a given carrier comes into contact) driving the separation among these algorithms.
Shortest, Fastest, And Foremost Broadcast In Dynamic Networks Highly dynamic networks rarely offer end-to-end connectivity at a given time. Yet, connectivity in these networks can be established over time and space, based on temporal analogues of multi-hop paths (also called journeys). Attempting to optimize the selection of the journeys in these networks naturally leads to the study of three cases: shortest (minimum hop), fastest (minimum duration), and foremost (earliest arrival) journeys. Efficient centralized algorithms exist to compute all cases, when the full knowledge of the network evolution is given. In this paper, we study the distributed counterparts of these problems, i.e. shortest, fastest, and foremost broadcast with termination detection (TDB), with minimal knowledge on the topology. We show that the feasibility of each of these problems requires distinct features on the evolution, through identifying three classes of dynamic graphs wherein the problems become gradually feasible: graphs in which the re-appearance of edges is recurrent (class R), bounded-recurrent (class B), or periodic (class P), together with specific knowledge, respectively n (the number of nodes), Δ (a bound on the recurrence time), and p (the period). In these classes it is not required that all pairs of nodes get in contact; only that the overall footprint of the graph is connected over time. Our results, together with the strict inclusions between P, B, and R, imply a feasibility order among the three variants of the problem, i.e. TDB[foremost] requires weaker assumptions on the topology dynamics than TDB[shortest], which itself requires less than TDB[fastest]. Conversely, these differences in feasibility imply that the computational powers of R_n, B_Δ, and P_p also form a strict hierarchy.
Agreement in directed dynamic networks We study the fundamental problem of achieving consensus in a synchronous dynamic network, where an omniscient adversary controls the unidirectional communication links. Its behavior is modeled as a sequence of directed graphs representing the active (i.e. timely) communication links per round. We prove that consensus is impossible under some natural weak connectivity assumptions, and introduce vertex-stable root components as a practical, not overly strong, means for circumventing this impossibility. Essentially, we assume that there is a short period of time during which an arbitrary part of the network remains strongly connected, while its interconnect topology keeps changing continuously. We present a consensus algorithm that works under this assumption, and prove its correctness. Our algorithm maintains a local estimate of the communication graphs, and applies techniques for detecting stable network properties and univalent system configurations. Our possibility results are complemented by several impossibility results and lower bounds, which reveal that our algorithm is asymptotically optimal.
Computing Shortest, Fastest, and Foremost Journeys in Dynamic Networks New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to capture effectively in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least cost journeys (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys).
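In this fixed-schedule setting, where the whole evolution is known in advance, foremost (earliest-arrival) journeys reduce to a single forward scan of the schedule. A minimal sketch, assuming one graph snapshot per time step and unit traversal time (names and representation illustrative):

```python
# Earliest-arrival (foremost) journeys over a known edge schedule.
# schedule[t] is the set of undirected edges present during step t.

def foremost(schedule, source):
    horizon = len(schedule)
    arrival = {source: 0}                     # earliest known arrival times
    for t in range(horizon):                  # scan snapshots in time order
        for u, v in schedule[t]:
            for a, b in ((u, v), (v, u)):     # edges usable in both directions
                if arrival.get(a, horizon + 1) <= t and t + 1 < arrival.get(b, horizon + 1):
                    arrival[b] = t + 1        # traverse (a, b) during step t
    return arrival

# edges present at times 0, 1, 2 on nodes 0..3
schedule = [{(0, 1)}, {(1, 2)}, {(0, 3), (2, 3)}]
print(foremost(schedule, 0))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

Shortest and fastest journeys need slightly more bookkeeping (per-node hop counts, or start times of pending journeys), but follow the same scan-the-schedule pattern.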
How to Explore a Fast-Changing World (Cover Time of a Simple Random Walk on Evolving Graphs) Motivated by real world networks and use of algorithms based on random walks on these networks we study the simple random walks on dynamic undirected graphs with fixed underlying vertex set, i.e., graphs which are modified by inserting or deleting edges at every step of the walk. We are interested in the expected time needed to visit all the vertices of such a dynamic graph, the cover time, under the assumption that the graph is being modified by an oblivious adversary. It is well known that on connected static undirected graphs the cover time is polynomial in the size of the graph. On the contrary and somewhat counter-intuitively, we show that there are adversary strategies which force the expected cover time of a simple random walk on connected dynamic graphs to be exponential. We relate this result to the cover time of static directed graphs. In addition we provide a simple strategy, the lazy random walk, that guarantees polynomial cover time regardless of the changes made by the adversary.
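The fix the abstract mentions is tiny: make the walk lazy. A sketch of one step, assuming the walker stays put with probability 1/2 (the standard choice) and otherwise moves to a uniformly random neighbour in the current graph:

```python
import random

def lazy_step(adj_now, pos):
    # adj_now: adjacency lists of the graph *at this step* (the adversary
    # may have changed it since the last step); pos: current vertex.
    if random.random() < 0.5 or not adj_now[pos]:
        return pos                          # lazy: stay with probability 1/2
    return random.choice(adj_now[pos])      # otherwise: uniform random neighbour
```

Laziness prevents an oblivious adversary from timing edge changes so as to keep herding the walker away from unvisited vertices, restoring a polynomial cover time.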
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
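The linear consensus protocol at the core of this line of work, for agents with fixed topology and no delays, is:

```latex
% Each agent moves toward its neighbours; N_i is the neighbour set of
% agent i and a_ij > 0 are the edge weights.
\dot{x}_i(t) \;=\; \sum_{j \in N_i} a_{ij}\,\bigl(x_j(t) - x_i(t)\bigr)
```

Stacked over all agents this reads ẋ = −Lx for the graph Laplacian L, and the convergence rate is governed by the second-smallest Laplacian eigenvalue λ₂, the Fiedler eigenvalue (algebraic connectivity) mentioned above.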
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
Optimum insertion/deletion point selection for fractional sample rate conversion In this paper, an optimum insertion/deletion point selection algorithm for fractional sample rate conversion (SRC) is proposed. The direct insertion/deletion technique achieves low complexity and low power consumption as compared to the other fractional SRC methods. Using a multiple set insertion/deletion technique is efficient for reduction of distortion caused by the insertion/deletion step. When the conversion factor is (N ± α)/N, the number of possible patterns of insertion/deletion points and the number of combinations for multiple set inserters/deleters grow as α increases. The proposed algorithm minimizes the distortion due to SRC by selecting the patterns and the combinations.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.039387
0.028571
0.028571
0.028571
0.019048
0.012283
0.007397
0.000748
0.000024
0
0
0
0
0
PathSeeker: A Fast Mapping Algorithm for CGRAs Coarse-grained reconfigurable arrays (CGRAs) have gained traction over the years as a low-power accelerator due to the efficient mapping of the compute-intensive loops onto the 2-D array by the CGRA compiler. When encountering a mapping failure for a given node, existing mapping techniques either exit and retry the mapping anew, or perform backtracking, i.e., recursively remove the previously mapped node to find a valid mapping. Abandoning mapping and starting afresh can deteriorate the quality of mapping and the compilation time. Even backtracking may not be the best choice since the previous node may not be the incorrectly placed node. To tackle this issue, we propose PathSeeker - a mapping approach that analyzes mapping failures and performs local adjustments to the schedule to obtain a mapping. Experimental results on 35 top performance-critical loops from MiBench, Rodinia, and Parboil benchmark suites demonstrate that PathSeeker can map all of them with better mapping quality and dramatically less compilation time than the previous state-of-the-art approaches - GraphMinor and RAMP, which were unable to map 20 and 5 loops, respectively. Over these benchmarks, PathSeeker achieves 28% better performance at 550x compilation speedup over GraphMinor and 3% better performance at 10x compilation speedup over RAMP on a 4x4 CGRA.
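The contrast between restarting, backtracking, and local adjustment can be sketched as follows. The helper callables `candidate_slots` and `conflicts` are hypothetical, and this is a toy rendering of the idea rather than PathSeeker's actual algorithm:

```python
def map_nodes(nodes, candidate_slots, conflicts):
    """Sketch of local adjustment on mapping failure. candidate_slots(n)
    yields legal (PE, time) slots for node n; conflicts(n, slot, placed)
    returns the already-placed nodes occupying that slot (assumed to
    ignore the queried node itself).
    """
    placed = {}
    for n in nodes:
        slot = next((s for s in candidate_slots(n)
                     if not conflicts(n, s, placed)), None)
        if slot is None:
            # Local adjustment: instead of restarting anew or blindly
            # removing the previous node, try to re-slot one blocker.
            for s in candidate_slots(n):
                blockers = conflicts(n, s, placed)
                if len(blockers) != 1:
                    continue
                b = blockers[0]
                alt = next((t for t in candidate_slots(b)
                            if t != placed[b]
                            and not conflicts(b, t, placed)), None)
                if alt is not None:
                    placed[b] = alt    # nudge the blocker aside
                    slot = s
                    break
            if slot is None:
                return None  # a full mapper would backtrack or retry here
        placed[n] = slot
    return placed
```

The win claimed in the abstract comes from the cheap local move usually succeeding, so the expensive fallback paths are rarely taken.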
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
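The post/wait style of loop synchronization that such compilers insert can be demonstrated with ordinary threads. A minimal sketch, using Python events to stand in for hardware post/wait instructions:

```python
import threading

def doacross(n, body, dependence_distance=1):
    """Post/wait synchronization for a loop with a cross-iteration
    dependence: iteration i may run its body only after iteration
    i - d has 'posted'. A concurrentizing compiler inserts this pair;
    the algorithms in the paper decide where it goes and which pairs
    are redundant and can be eliminated.
    """
    posted = [threading.Event() for _ in range(n)]

    def run(i):
        if i - dependence_distance >= 0:
            posted[i - dependence_distance].wait()   # wait(i - d)
        body(i)
        posted[i].set()                              # post(i)

    threads = [threading.Thread(target=run, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# a[i] = a[i-1] + 1 carried across iterations:
a = [0] * 8
doacross(8, lambda i: a.__setitem__(i, (a[i - 1] if i else 0) + 1))
print(a)  # [1, 2, 3, 4, 5, 6, 7, 8]
```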
A Software Scheme for Multithreading on CGRAs Recent industry trends show a drastic rise in the use of hand-held embedded devices, from everyday applications to medical (e.g., monitoring devices) and critical defense applications (e.g., sensor nodes). The two key requirements in the design of such devices are their processing capabilities and battery life. There is therefore an urgency to build high-performance and power-efficient embedded devices, inspiring researchers to develop novel system designs for the same. The use of a coprocessor (application-specific hardware) to offload power-hungry computations is gaining favor among system designers to suit their power budgets. We propose the use of CGRAs (Coarse-Grained Reconfigurable Arrays) as a power-efficient coprocessor. Though CGRAs have been widely used for streaming applications, the extensive compiler support required limits its applicability and use as a general purpose coprocessor. In addition, a CGRA structure can efficiently execute only one statically scheduled kernel at a time, which is a serious limitation when used as an accelerator to a multithreaded or multitasking processor. In this work, we envision a multithreaded CGRA where multiple schedules (or kernels) can be executed simultaneously on the CGRA (as a coprocessor). We propose a comprehensive software scheme that transforms the traditionally single-threaded CGRA into a multithreaded coprocessor to be used as a power-efficient accelerator for multithreaded embedded processors. Our software scheme includes (1) a compiler framework that integrates with existing CGRA mapping techniques to prepare kernels for execution on the multithreaded CGRA and (2) a runtime mechanism that dynamically schedules multiple kernels (offloaded from the processor) to execute simultaneously on the CGRA coprocessor. Our multithreaded CGRA coprocessor implementation thus makes it possible to achieve improved power-efficient computing in modern multithreaded embedded systems.
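A toy version of the runtime side of such a scheme is sketched below; the pool-of-PEs region model and all names are illustrative assumptions, not the paper's mechanism:

```python
from collections import deque

def schedule_kernels(kernels, total_pes):
    """Toy runtime in the spirit of a multithreaded CGRA coprocessor.
    The CGRA is modeled only as a pool of PEs and each kernel as
    (name, pes_needed, cycles). Offloaded kernels are queued and
    dispatched whenever a large-enough region of the array is free,
    so several schedules execute simultaneously.
    """
    queue, running, free, t, log = deque(kernels), [], total_pes, 0, []
    while queue or running:
        # Retire kernels whose time is up, reclaiming their PEs.
        for k in [k for k in running if k[2] <= t]:
            running.remove(k)
            free += k[1]
        # Dispatch every queued kernel that currently fits.
        while queue and queue[0][1] <= free:
            name, pes, cycles = queue.popleft()
            running.append((name, pes, t + cycles))
            free -= pes
            log.append((t, name))
        t += 1
    return log

print(schedule_kernels([("fir", 8, 5), ("fft", 8, 5), ("dot", 4, 3)], 16))
# [(0, 'fir'), (0, 'fft'), (5, 'dot')]
```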
Domain Specialization Is Generally Unnecessary for Accelerators. Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator i...
Chasing Carbon: The Elusive Environmental Footprint of Computing Given recent algorithm, software, and hardware innovation, computing has enabled a plethora of new applications. As computing becomes increasingly ubiquitous, however, so does its environmental impact. This article brings the issue to the attention of computer-systems researchers. Our analysis, built on industry-reported characterization, quantifies the environmental effects of computing in terms of carbon emissions. Broadly, carbon emissions have two sources: operational energy consumption, and hardware manufacturing and infrastructure. Although carbon emissions from the former are decreasing, thanks to algorithmic, software, and hardware innovations that boost performance and power efficiency, the overall carbon footprint of computer systems continues to grow. This work quantifies the carbon output of computer systems to show that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure. We, therefore, outline future directions for minimizing the environmental impact of computing systems.
OpenCGRA: An Open-Source Unified Framework for Modeling, Testing, and Evaluating CGRAs Coarse-grained reconfigurable arrays (CGRAs), loosely defined as arrays of functional units (e.g., adder, subtractor, multiplier, divider, or larger multi-operation units, but smaller than a general-purpose core) interconnected through a Network-on-Chip, provide higher flexibility than domain-specific ASIC accelerators while offering increased hardware efficiency with respect to fine-grained reconfigurable devices, such as Field Programmable Gate Arrays (FPGAs). The fast evolving fields of machine learning and edge computing, which are seeing a continuous flow of novel algorithms and larger models, make CGRAs ideal architectures to allow domain specialization without losing too much generality. Designing and generating a CGRA, however, still requires defining the type and number of the specific functional units, implementing their interconnect and the network topology, and performing simulation and validation, given a variety of workloads of interest. In this paper, we propose OpenCGRA, the first open-source integrated framework that is able to support the full top-to-bottom design flow for specializing and implementing CGRAs: modeling at different abstraction levels (functional level, cycle level, register-transfer level) with compiler support, verification at different granularities (unit testing, integration testing, property-based testing), simulation, generation of synthesizable Verilog, and characterization (area, power, and timing). With OpenCGRA, it takes only a few hours to build a specialized power- and area-efficient CGRA through the entire design flow, given a set of applications of interest. OpenCGRA is available online at https://github.com/pnnl/OpenCGRA.
A Fully Pipelined and Dynamically Composable Architecture of CGRA. Future processor chips will not be limited by the transistor resources, but will be mainly constrained by energy efficiency. Reconfigurable fabrics bring higher energy efficiency than CPUs via customized hardware that adapts to user applications. Among different reconfigurable fabrics, coarse-grained reconfigurable arrays (CGRAs) can be even more efficient than fine-grained FPGAs when bit-level customization is not necessary in target applications. CGRAs were originally developed in the era when transistor resources were more critical than energy efficiency. Previous work shares hardware among different operations via modulo scheduling and time multiplexing of processing elements. In this work, we focus on an emerging scenario where transistor resources are rich. We develop a novel CGRA architecture that enables full pipelining and dynamic composition to improve energy efficiency by taking full advantage of abundant transistors. Several new design challenges are solved. We implement a prototype of the proposed architecture in a commodity FPGA chip for verification. Experiments show that our architecture can fully exploit the energy benefits of customization for user applications in the scenario of rich transistor resources.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
The gem5 simulator The gem5 simulation infrastructure is the merger of the best aspects of the M5 [4] and GEMS [9] simulators. M5 provides a highly configurable simulation framework, multiple ISAs, and diverse CPU models. GEMS complements these features with a detailed and flexible memory system, including support for multiple cache coherence protocols and interconnect models. Currently, gem5 supports most commercial ISAs (ARM, ALPHA, MIPS, Power, SPARC, and x86), including booting Linux on three of them (ARM, ALPHA, and x86). The project is the result of the combined efforts of many academic and industrial institutions, including AMD, ARM, HP, MIPS, Princeton, MIT, and the Universities of Michigan, Texas, and Wisconsin. Over the past ten years, M5 and GEMS have been used in hundreds of publications and have been downloaded tens of thousands of times. The high level of collaboration on the gem5 project, combined with the previous success of the component parts and a liberal BSD-like license, make gem5 a valuable full-system simulation tool.
PRESENT: An Ultra-Lightweight Block Cipher With the establishment of the AES the need for new block ciphers has been greatly diminished; for almost all block cipher applications the AES is an excellent and preferred choice. However, despite recent implementation advances, the AES is not suitable for extremely constrained environments such as RFID tags and sensor networks. In this paper we describe an ultra-lightweight block cipher, PRESENT. Both security and hardware efficiency have been equally important during the design of the cipher and, at 1570 GE, the hardware requirements for PRESENT are competitive with today's leading compact stream ciphers.
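For reference, one round of the PRESENT datapath is small enough to sketch. The S-box and bit permutation below follow the published specification, while the key schedule and the 31-round loop are omitted, so this is a sketch rather than a full cipher:

```python
# One round of PRESENT: addRoundKey, sBoxLayer, pLayer.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def present_round(state, round_key):
    """state and round_key are 64-bit integers."""
    state ^= round_key                      # addRoundKey
    s = 0
    for nib in range(16):                   # sBoxLayer: 16 4-bit S-boxes
        s |= SBOX[(state >> (4 * nib)) & 0xF] << (4 * nib)
    p = 0
    for i in range(64):                     # pLayer: bit i -> 16*i mod 63
        j = 63 if i == 63 else (16 * i) % 63
        p |= ((s >> i) & 1) << j
    return p

print(hex(present_round(0x0123456789ABCDEF, 0x0)))
```

The substitution-permutation structure is what keeps the gate count low: the permutation is pure wiring in hardware, so only the 16 S-boxes and the key XOR cost logic.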
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
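For context, Adler's classic weak-injection equation, which the analysis above generalizes to strong injection, can be written as follows (the notation is assumed, not copied from the paper):

```latex
% Adler's weak-injection phase equation: \theta is the oscillator phase
% relative to the injected current I_inj, I_osc is the tank current,
% \Delta\omega_0 the free-running frequency offset, and Q the tank Q.
\frac{d\theta}{dt} \;=\; \Delta\omega_0 \;-\;
\frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}\,\sin\theta ,
\qquad
|\Delta\omega_0| \;\le\; \frac{\omega_0}{2Q}\,
\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}} \quad\text{(lock range)}
```

In a quadrature oscillator the "injection" is the coupling current from the companion core, which is comparable to the tank current; that is why the weak-injection assumption breaks down and the extended equations are needed.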
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
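The underlying translation property can be stated with the textbook M-path result; the coefficient below is the standard ideal-switch approximation and may differ from the paper's exact expressions:

```latex
% M-path impedance translation (ideal switches, 1/M duty cycle): near
% the LO, the baseband impedance Z_BB reappears, scaled, in series with
% the switch resistance R_sw, yielding a high-Q RF bandpass response.
Z_{\mathrm{in}}(\omega_{LO} + \Delta\omega) \;\approx\;
R_{sw} \;+\; \frac{M\,\sin^{2}(\pi/M)}{\pi^{2}}\; Z_{BB}(\Delta\omega)
```

For M = 4 the scaling factor is the familiar 2/π² ≈ 0.2, which is why a simple baseband RC can masquerade as a high-Q RF filter centered exactly at the clock frequency.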
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
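The classic real-sampling rate rule that the paper starts from is easy to state in code. A minimal sketch, with the quadrature relaxation only noted in the comment:

```python
def bandpass_sampling_rates(f_low, f_high):
    """Classic real (non-quadrature) bandpass sampling rule: any fs
    with 2*f_high/n <= fs <= 2*f_low/(n-1) avoids aliasing, for
    integer n up to floor(f_high / B), B = f_high - f_low. Quadrature
    sampling, as the paper shows, admits significantly more rates
    (minimum near B rather than 2B) because the I and Q samples
    jointly resolve the spectral ambiguity.
    """
    band = f_high - f_low
    ranges = []
    for n in range(1, int(f_high // band) + 1):
        lo = 2.0 * f_high / n
        hi = float("inf") if n == 1 else 2.0 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# GPS L1 C/A: a ~2 MHz band centered at 1575.42 MHz.
for lo, hi in bandpass_sampling_rates(1574.42e6, 1576.42e6)[:5]:
    print(f"{lo/1e6:.2f} .. {hi/1e6:.2f} MHz")
```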
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz, while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
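The per-band noise budgeting can be illustrated numerically. The density model below is an assumption for illustration, chosen to mirror the flat-AP versus 1/f^x-LFP spectral shapes described above:

```python
import numpy as np

def integrated_noise_rms(f_lo, f_hi, white_density, corner=None, x=1.0):
    """Integrate an input-referred noise density over a band to get rms
    noise in volts, showing why per-band noise targets make sense.
    Assumed density model: S(f) = white_density^2 * (1 + (corner/f)^x),
    i.e. white noise with an optional 1/f^x region.
    """
    f = np.logspace(np.log10(f_lo), np.log10(f_hi), 10_000)
    s = white_density**2 * np.ones_like(f)
    if corner is not None:
        s *= 1.0 + (corner / f) ** x
    return np.sqrt(np.trapz(s, f))

# AP band, 200 Hz - 5 kHz, with a 55 nV/rtHz-class flat density:
print(integrated_noise_rms(200, 5e3, 55e-9))  # ~3.8 uVrms, near spec
```

Relaxing the density in the narrow, 1/f-dominated LFP band while holding it only where the flat AP spectrum demands it is what lets the total current budget stay under a microwatt.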
1.2
0.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0