Dataset schema (column names, dtypes, and observed value ranges; for string columns the Min/Max values are string lengths in characters):

| Column | Dtype | Min | Max |
|---|---|---|---|
| Query Text | string | 10 | 40.4k |
| Ranking 1 | string | 12 | 40.4k |
| Ranking 2 | string | 12 | 36.2k |
| Ranking 3 | string | 10 | 36.2k |
| Ranking 4 | string | 13 | 40.4k |
| Ranking 5 | string | 12 | 36.2k |
| Ranking 6 | string | 13 | 36.2k |
| Ranking 7 | string | 10 | 40.4k |
| Ranking 8 | string | 12 | 36.2k |
| Ranking 9 | string | 12 | 36.2k |
| Ranking 10 | string | 12 | 36.2k |
| Ranking 11 | string | 20 | 6.21k |
| Ranking 12 | string | 14 | 8.24k |
| Ranking 13 | string | 28 | 4.03k |
| score_0 | float64 | 1 | 1.25 |
| score_1 | float64 | 0 | 0.25 |
| score_2 | float64 | 0 | 0.25 |
| score_3 | float64 | 0 | 0.25 |
| score_4 | float64 | 0 | 0.25 |
| score_5 | float64 | 0 | 0.25 |
| score_6 | float64 | 0 | 0.25 |
| score_7 | float64 | 0 | 0.24 |
| score_8 | float64 | 0 | 0.2 |
| score_9 | float64 | 0 | 0.03 |
| score_10 | float64 | 0 | 0 |
| score_11 | float64 | 0 | 0 |
| score_12 | float64 | 0 | 0 |
| score_13 | float64 | 0 | 0 |

Each example below consists of a Query Text, the documents returned as Ranking 1 through Ranking 13, and a closing line with the corresponding score_0 through score_13 values.
Event-Driven Data Acquisition and Digital Signal Processing—A Tutorial Event-driven analog-to-digital conversion and associated digital signal processing techniques are reviewed. Such techniques, still in the research stage, have the potential to significantly reduce the consumption of energy and bandwidth resources in several important applications.
A Level-Crossing Based QRS-Detection Algorithm for Wearable ECG Sensors In this paper, an asynchronous analog-to-information conversion system is introduced for measuring the RR intervals of the electrocardiogram (ECG) signals. The system contains a modified level-crossing analog-to-digital converter and a novel algorithm for detecting the R-peaks from the level-crossing sampled data in a compressed volume of data. Simulated with the MIT-BIH Arrhythmia Database, the proposed system delivers an average detection accuracy of 98.3%, a sensitivity of 98.89%, and a positive prediction of 99.4%. Synthesized in 0.13 μm CMOS technology with a 1.2 V supply voltage, the overall system consumes 622 nW with a core area of 0.136 mm², which makes it suitable for wearable wireless ECG sensors in body-sensor networks.
Hardware-Algorithms Co-Design and Implementation of an Analog-to-Information Converter for Biosignals Based on Compressed Sensing. We report the design and implementation of an Analog-to-Information Converter (AIC) based on Compressed Sensing (CS). The system is realized in a CMOS 180 nm technology and targets the acquisition of bio-signals with Nyquist frequency up to 100 kHz. To maximize performance and reduce hardware complexity, we co-design hardware together with acquisition and reconstruction algorithms. The resulting A...
Continuous Time Level Crossing Sampling ADC for Bio-Potential Recording Systems In this paper we present a fixed window level crossing sampling analog to digital convertor for bio-potential recording sensors. This is the first proposed and fully implemented fixed window level crossing ADC without local DACs and clocks. The circuit is designed to reduce data size, power, and silicon area in future wireless neurophysiological sensor systems. We built a testing system to measure bio-potential signals and used it to evaluate the performance of the circuit. The bio-potential amplifier offers a gain of 53 dB within a bandwidth of 200 Hz-20 kHz. The input-referred rms noise is 2.8 µV. In the asynchronous level crossing ADC, the minimum delta resolution is 4 mV. The input signal frequency of the ADC is up to 5 kHz. The system was fabricated using the AMI 0.5 µm CMOS process. The chip size is 1.5 mm by 1.5 mm. The power consumption of the 4-channel system from a 3.3 V supply is 118.8 µW in the static state and 501.6 µW with a 240 kS/s sampling rate. The conversion efficiency is 1.6 nJ/conversion.
Event-Driven GHz-Range Continuous-Time Digital Signal Processor With Activity-Dependent Power Dissipation Presented is a clockless, continuous-time (CT) GHz processor that bypasses some of the limitations of conventional digital and analog implementations. Per-edge digital signal encoding is used for parallel processing of continuous-time samples with a temporal spacing as narrow as 15 ps, generated by a 3-b CT flash ADC. Parallel digital delay chains and programmable charge pumps realize the asynchronous filtering operation, each consuming negligible power while awaiting a new sample. A six-tap CT ADC and CT digital FIR processor system occupies 0.07 mm² and achieves a dynamic range of over 20 dB in the 0.8-3.2-GHz signal range. The system's rate of operations automatically adapts to the signal, thus causing its power dissipation to vary in the range of 1.1 to 10 mW according to input activity.
MEMS-based RF channel selection for true software-defined cognitive radio and low-power sensor communications An evaluation of the potential for MEMS technologies to realize the RF front-end frequency gating spectrum analyzer function needed by true software-defined cognitive radios and ultra-low-power autonomous sensor network radios is presented. Here, RF channel selection (as opposed to band selection), which removes all interferers, even those in band, and passes only the desired channel, is key to substantial potential increases in call volume with simultaneous reductions in power consumption. The relevant MEMS technologies most conducive to RF channel-selecting front-ends include vibrating micromechanical resonators that exhibit record on-chip Qs at gigahertz frequencies; resonant switches that provide extremely efficient switched-mode power gain for both transmit and receive paths; medium-scale integrated micromechanical circuits that implement on/off switchable filter-amplifier banks; and fabrication technologies that integrate MEMS together with foundry CMOS transistors in a fully monolithic low-capacitance single-chip process. The many issues that make realization of RF channel selection a truly challenging proposition include resonator drift stability, mechanical circuit complexity, repeatability and fabrication tolerances, and the need for resonators at gigahertz frequencies with simultaneously high Q (>30,000) and low impedance (e.g., 50 Ω for conventional systems). Some perspective on which resonator technologies might best achieve these simultaneous attributes is provided.
Signal-dependent variable-resolution clockless A/D conversion with application to continuous-time digital signal processing A variable-resolution (VR) quantizer with input-activity-dependent adjustable resolution is presented. Several potential schemes are discussed; the favored scheme achieves adjustable resolution by level skipping according to the speed of the input. The advantages of a VR analog-to-digital conversion (ADC) are presented with applications in continuous-time (CT) digital signal processing systems. It is shown that a decrease in resolution for fast inputs does not corrupt the in-band spectrum while leading to a reduction in the number of samples produced by a CT ADC. The result is a significant decrease in power dissipation but without in-band performance degradation. Analysis and extensive simulations are provided. Simulations using signals in the voice band show that a power reduction of over 80% is achievable with a VR quantization.
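The level-skipping idea described above can be made concrete with a small sketch. The following Python snippet is a toy illustration only, not the paper's circuit; the function name, the 2× level-skip rule, and the `skip_threshold` parameter are assumptions introduced for the example. It shows how a level-crossing sampler emits samples only when the input crosses a quantization level, and how coarsening the level grid for fast inputs reduces the sample count.

```python
import numpy as np

def level_crossing_sample(signal, dt, delta, skip_threshold=None):
    """Toy level-crossing sampler: emit a (time, level) pair whenever the input
    crosses a level spaced `delta` apart. If `skip_threshold` is set, fast inputs
    (large slew) use a coarser grid, loosely mimicking variable resolution."""
    samples = []
    last_level = np.round(signal[0] / delta) * delta
    for i in range(1, len(signal)):
        step = delta
        if skip_threshold is not None and abs(signal[i] - signal[i - 1]) / dt > skip_threshold:
            step = 2 * delta                    # drop one bit of resolution for fast inputs
        while signal[i] >= last_level + step:   # upward crossings
            last_level += step
            samples.append((i * dt, last_level))
        while signal[i] <= last_level - step:   # downward crossings
            last_level -= step
            samples.append((i * dt, last_level))
    return samples

# A slow sine produces far fewer samples than uniform sampling would,
# and fewer still when fast segments are allowed to skip levels.
t = np.arange(0, 1, 1e-4)
x = np.sin(2 * np.pi * 5 * t)
print(len(level_crossing_sample(x, 1e-4, delta=0.05)))
print(len(level_crossing_sample(x, 1e-4, delta=0.05, skip_threshold=20.0)))
```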
A Resolution-Reconfigurable 5-to-10-Bit 0.4-to-1 V Power Scalable SAR ADC for Sensor Applications A power-scalable SAR ADC for sensor applications is presented. The ADC features a reconfigurable 5-to-10-bit DAC whose power scales exponentially with resolution. At low resolutions where noise and linearity requirements are reduced, supply voltage scaling is leveraged to further reduce the energy-per-conversion. The ADC operates up to 2 MS/s at 1 V and 5 kS/s at 0.4 V, and its power scales linearly with sample rate down to leakage levels of 53 nW at 1 V and 4 nW at 0.4 V. Leakage power-gating during a SLEEP mode in between conversions reduces total power by up to 14% at sample rates below 1 kS/s. Prototyped in a low-power 65 nm CMOS process, the ADC in 10-bit mode achieves an INL and DNL of 0.57 LSB and 0.58 LSB respectively at 0.6 V, and the Nyquist SNDR and SFDR are 55 dB and 69 dB respectively at 0.55 V and 20 kS/s. The ADC achieves an optimal FOM of 22.4 fJ/conversion-step at 0.55 V in 10-bit mode. The combined techniques of DAC resolution and voltage scaling maximize efficiency at low resolutions, resulting in an FOM that increases by only 7x over the 5-bit scaling range, improving upon a 32x degradation that would otherwise arise from truncation of bits from an ADC of fixed resolution and voltage.
A 90-nm CMOS Doherty power amplifier with minimum AM-PM distortion A linear Doherty amplifier is presented. The design reduces AM-PM distortion by optimizing the device-size ratio of the carrier and peak amplifiers to cancel each other's phase variation. Consequently, this design achieves both good linearity and high backed-off efficiency associated with the Doherty technique, making it suitable for systems with large peak-to-average power ratio (WLAN, WiMAX, etc.). The fully integrated design has on-chip quadrature hybrid coupler, impedance transformer, and output matching networks. The experimental 90-nm CMOS prototype operating at 3.65 GHz achieves 12.5% power-added efficiency (PAE) at 6 dB back-off, while exceeding IEEE 802.11a -25 dB error vector magnitude (EVM) linearity requirement (using 1.55-V supply). A 28.9 dBm maximum Psat is achieved with 39% PAE (using 1.85-V supply). The active die area is 1.2 mm2.
Random Forests Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
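The two ingredients the abstract highlights, random feature selection at each split and internal (out-of-bag) error estimates, are exposed directly in common library implementations. Below is a minimal sketch using scikit-learn (assumed available); the synthetic dataset and all parameter values are purely illustrative.

```python
# Random-subspace splitting and out-of-bag error estimation with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# max_features="sqrt": each split considers a random subset of the features.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                oob_score=True, random_state=0)
forest.fit(X_tr, y_tr)

print("out-of-bag score:", forest.oob_score_)          # the "internal estimate" of accuracy
print("test accuracy:   ", forest.score(X_te, y_te))
print("feature importances (first 5):", forest.feature_importances_[:5])
```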
Searching for black-hole faults in a network using multiple agents We consider a fixed communication network where (software) agents can move freely from node to node along the edges. A black hole is a faulty or malicious node in the network such that if an agent enters this node, then it immediately “dies.” We are interested in designing an efficient communication algorithm for the agents to identify all black holes. We assume that we have k agents starting from the same node s and knowing the topology of the whole network. The agents move through the network in synchronous steps and can communicate only when they meet in a node. At the end of the exploration of the network, at least one agent must survive and must know the exact locations of the black holes. If the network has n nodes and b black holes, then any exploration algorithm needs Ω(n/k + Db) steps in the worst-case, where Db is the worst case diameter of the network with at most b nodes deleted. We give a general algorithm which completes exploration in O((n/k)logn/loglogn + bDb) steps for arbitrary networks, if b≤k/2. In the case when b≤k/2, and , we give a refined algorithm which completes exploration in asymptotically optimal O(n/k) steps.
Design Techniques for a 66 Gb/s 46 mW 3-Tap Decision Feedback Equalizer in 65 nm CMOS. This paper analyzes and describes design techniques enabling energy-efficient multi-tap decision feedback equalizers operated at or near the speed limits of the technology. We propose a closed-loop architecture utilizing three techniques to achieve this goal, namely a merged latch and summer, reduced latch gain, and a dynamic latch design. A 65 nm CMOS 3-tap implementation of the proposed architec...
Formal Analysis of Leader Election in MANETs Using Real-Time Maude.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
score_0–score_13: 1.038008, 0.023452, 0.023452, 0.018138, 0.012387, 0.008144, 0.002212, 0.000239, 0.000003, 0, 0, 0, 0, 0
Chipyard: Integrated Design, Simulation, and Implementation Framework for Custom SoCs Continued improvement in computing efficiency requires functional specialization of hardware designs. Agile hardware design methodologies have been proposed to alleviate the increased design costs of custom silicon architectures, but their practice thus far has been accompanied with challenges in integration and validation of complex systems-on-a-chip (SoCs). We present the Chipyard framework, an integrated SoC design, simulation, and implementation environment for specialized compute systems. Chipyard includes configurable, composable, open-source, generator-based IP blocks that can be used across multiple stages of the hardware development flow while maintaining design intent and integration consistency. Through cloud-hosted FPGA accelerated simulation and rapid ASIC implementation, Chipyard enables continuous validation of physically realizable customized systems.
CHIPKIT: An agile, reusable open-source framework for rapid test chip development The current trend for domain-specific architectures has led to renewed interest in research test chips to demonstrate new specialized hardware. Tapeouts also offer huge pedagogical value garnered from real hands-on exposure to the whole system stack. However, success with tapeouts requires hard-earned experience, and the design process is time consuming and fraught with challenges. Therefore, custom chips have remained the preserve of a small number of research groups, typically focused on circuit design research. This article describes the CHIPKIT framework: a reusable SoC subsystem which provides basic IO, an on-chip programmable host, off-chip hosting, memory, and peripherals. This subsystem can be readily extended with new IP blocks to generate custom test chips. Central to CHIPKIT is an agile RTL development flow, including a code generation tool called VGEN. Finally, we discuss best practices for full-chip validation across the entire design cycle.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
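A rough Python analogue may help make the idea concrete (illustrative only, not the paper's machine-level realization): the "compiled" program is a flat list of routine references with inline operands, and each routine ends by transferring control to the next entry, so there is no central decode loop.

```python
# Hypothetical Python analogue of threaded code: the program is a flat list of
# routine references and inline operands; each routine falls through to the next
# entry instead of returning to a central interpreter switch.
def make_machine(program):
    stack, ip = [], 0

    def next_():                    # fetch the next threaded-code entry and enter it
        nonlocal ip
        op = program[ip]
        ip += 1
        op()

    def lit():                      # push the inline operand that follows this entry
        nonlocal ip
        stack.append(program[ip])
        ip += 1
        next_()

    def add():
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
        next_()

    def done():
        print("result:", stack.pop())

    return {"lit": lit, "add": add, "done": done, "run": next_}

program = []
m = make_machine(program)
# "Compiled" program: push 2, push 3, add, stop.
program.extend([m["lit"], 2, m["lit"], 3, m["add"], m["done"]])
m["run"]()                          # prints: result: 5
```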
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
The Oracle Problem in Software Testing: A Survey Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behavior is called the “test oracle problem”. Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test or...
BROOM: An Open-Source Out-of-Order Processor With Resilient Low-Voltage Operation in 28-nm CMOS The Berkeley resilient out-of-order machine (BROOM) is a resilient, wide-voltage-range implementation of an open-source out-of-order (OoO) RISC-V processor implemented in an ASIC flow. A 28-nm test-chip contains a BOOM OoO core and a 1-MiB level-2 (L2) cache, enhanced with architectural error tolerance for low-voltage operation. It was implemented by using an agile design methodology, where the initial OoO architecture was transformed to perform well in a high-performance, low-leakage CMOS process, informed by synthesis, place, and route data by using foundry-provided standard-cell library and memory compiler. The two-person-team productivity was improved in part thanks to a number of open-source artifacts: The Chisel hardware construction language, the RISC-V instruction set architecture, the rocket-chip SoC generator, and the open-source BOOM core. The resulting chip, taped out using TSMC’s 28-nm HPM process, runs at 1.0 GHz at 0.9 V, and is able to operate down to 0.47 V.
A Case for Accelerating Software RTL Simulation RTL simulation is a critical tool for hardware design but its current slow speed often bottlenecks the whole design process. Simulation speed becomes even more crucial for agile and open-source hardware design methodologies, because the designers not only want to iterate on designs quicker, but they may also have less resources with which to simulate them. In this article, we execute multiple simulators and analyze them with hardware performance counters. We find some open-source simulators not only outperform a leading commercial simulator, they also achieve comparable or higher instruction throughput on the host processor. Although advanced optimizations may increase the complexity of the simulator, they do not significantly hinder instruction throughput. Our findings make the case that there is significant room to accelerate software simulation and open-source simulators are a great starting point for researchers.
A Hybrid Systolic-Dataflow Architecture for Inductive Matrix Algorithms Dense linear algebra kernels are critical for wireless, and the oncoming proliferation of 5G only amplifies their importance. Due to the inductive nature of many such algorithms, parallelism is difficult to exploit: parallel regions have fine-grain producer/consumer interaction with iteratively changing dependence distance, reuse rate, and memory access patterns. This makes multi-threading impractical due to fine-grain synchronization, and vectorization ineffective due to the non-rectangular iteration domain. CPUs, DSPs, and GPUs perform an order of magnitude below peak. Our insight is that if the nature of inductive dependences and memory accesses were explicit in the hardware/software interface, then a spatial architecture could efficiently execute parallel code regions. To this end, we first develop a novel execution model, inductive dataflow, where inductive dependence patterns and memory access patterns (streams) are first-order primitives. Second, we develop a hybrid spatial architecture combining systolic and tagged dataflow execution to attain high utilization at low energy and area cost. Finally, we create a scalable design through a novel vector-stream control model which amortizes control overhead both in time and spatially across architecture lanes. We evaluate our design, REVEL, with a full stack (compiler, ISA, simulator, RTL). Across a suite of linear algebra kernels, REVEL outperforms equally-provisioned DSPs by 4.6×-37×. Compared to state-of-the-art spatial architectures, REVEL is a mean of 3× faster. Compared to a set of ASICs, REVEL is only 2× the power and half the area.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
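For intuition, a toy timeout-based detector in this spirit can be sketched in a few lines of Python. This is an illustrative sketch under simplifying assumptions, not the paper's formal construction, and the class and method names are invented: it may wrongly suspect a slow process (a mistake), but each discovered mistake doubles that process's timeout, so false suspicions of correct processes eventually stop while crashed processes stay suspected.

```python
import time

class HeartbeatFailureDetector:
    """Toy timeout-based detector: suspects a process whose heartbeat is overdue,
    and doubles the timeout whenever a suspicion turns out to be a mistake."""

    def __init__(self, processes, initial_timeout=1.0):
        now = time.monotonic()
        self.timeout = {p: initial_timeout for p in processes}
        self.last_heartbeat = {p: now for p in processes}
        self.suspected = set()

    def heartbeat(self, p):
        self.last_heartbeat[p] = time.monotonic()
        if p in self.suspected:          # we suspected a live process: correct the mistake
            self.suspected.discard(p)
            self.timeout[p] *= 2         # ...and be more patient with it from now on

    def query(self):
        now = time.monotonic()
        for p, last in self.last_heartbeat.items():
            if now - last > self.timeout[p]:
                self.suspected.add(p)
        return set(self.suspected)

fd = HeartbeatFailureDetector(["p1", "p2", "p3"], initial_timeout=0.5)
fd.heartbeat("p1")
print(fd.query())                        # empty at first; grows as heartbeats go missing
```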
Scalable video coding and transport over broadband wireless networks With the emergence of broadband wireless networks and increasing demand of multimedia information on the Internet, wireless multimedia services are foreseen to become widely deployed in the next decade. Real-time video transmission typically has requirements on quality of service (QoS). However, wireless channels are unreliable and the channel bandwidth varies with time, which may cause severe deg...
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to its increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient, however they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient, however they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
A 40 Gb/s CMOS Serial-Link Receiver With Adaptive Equalization and Clock/Data Recovery This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high-...
VirFID: A Virtual Force (VF)-based Interest-Driven moving phenomenon monitoring scheme using multiple mobile sensor nodes. In this paper, we study mobile sensor network (MSN) architectures and algorithms for monitoring a moving phenomenon in an unknown and open area using a group of autonomous mobile sensor (MS) nodes. Monitoring a moving phenomenon involves challenges due to limited communication/sensing ranges of MS nodes, the phenomenon’s unpredictable changes in distribution and position, and the lack of information on the sensing area. To address the challenges and meet the objective of the maximization of weighted sensing coverage, we propose a novel scheme, namely VirFID (Virtual Force (VF)-based Interest-Driven moving phenomenon monitoring). In VirFID, MS nodes move toward the positions where more interesting sensing data can be obtained by utilizing the virtual force, which is calculated based on the distance between MS nodes and sensed values in the area of interest. MS nodes also perform network-wise information sharing to increase the weighted sensing coverage. Depending on the level of information used, three variants of VirFID are evaluated: VirFID-LIB (Local Information-Based), VirFID-GHL (Global Highest and Lowest), and VirFID-IBN (Interests at Boundary Nodes). In addition, an analytical model for estimating MSN speed is designed. Simulations are performed to compare the performance of three VirFID variants with other approaches. Our simulation results show that VirFID algorithms outperform other schemes in terms of the weighted coverage efficiency, and VirFID-IBN achieves the highest weighted coverage efficiency among VirFID variants.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
score_0–score_13: 1.11, 0.11, 0.1, 0.1, 0.1, 0.1, 0.1, 0.003333, 0, 0, 0, 0, 0, 0
A Class-G Switched-Capacitor RF Power Amplifier A switched-capacitor power amplifier (SCPA) that realizes an envelope elimination and restoration/polar class-G topology is introduced. A novel voltage-tolerant switch enables the use of two power supply voltages which increases efficiency and output power simultaneously. Envelope digital-to-analog conversion in the polar transmitter is achieved using an SC RF DAC that exhibits high efficiency at typical output power backoff levels. In addition, high linearity is achieved and no digital predistortion is required. Implemented in 65 nm CMOS, the measured peak output power and power-added efficiency (PAE) are 24.3 dBm and 43.5%, respectively, whereas when amplifying 802.11g 64-QAM OFDM signals, the average output power and PAE are 16.8 dBm and 33%, respectively. The measured EVM is 2.9%.
A Multiphase Switched Capacitor Power Amplifier. This paper presents an all-digital multiphase switched capacitor power amplifier (MP-SCPA) implemented in a 130-nm CMOS. Quadrature architectures suffer reduced output power and efficiency owing to the combination of out-of-phase signals. The MP architecture reduces the phase difference between the basis vectors that are combined, and hence the output power and efficiency are greatly improved. Sixt...
Split-Array, C-2C Switched-Capacitor Power Amplifiers. This paper presents a 13-b C-2C split-array (SA) multiphase switched-capacitor power amplifier (SAMP-SCPA) implemented in 65-nm CMOS. The SAMP-SCPA was designed for 16-b resolution to offer extra states for linearization/calibration using digital pre-distortion (DPD). Resolution limits for SA SCPAs are presented. The SAMP-SCPA allows for the improvement of the SCPA resolution while minimizing the ...
A Compact Dual-Band Digital Polar Doherty Power Amplifier Using Parallel-Combining Transformer This paper presents a dual-band high-power digital polar Doherty power amplifier (PA). A current-mode parallel-combining transformer (PCT) power combiner is introduced for Doherty operation, which realizes simultaneous dual-band coverage, high output power, and backoff efficiency enhancement with an ultra-compact single-transformer footprint. The PA is implemented in 55-nm CMOS and occupies 1.11 mm² die area. It achieves 28.9-dBm peak P_out with 36.8% PAE in low band (LB) and 27.0-dBm peak P_out with 25.4% PAE in high band (HB). For 12-subcarrier 180-kHz bandwidth cellular narrowband Internet-of-Things (NB-IoT) signals, the peak P_avg of 24.4 dBm with average PAE of 29.5% at 850 MHz and the peak P_avg of 23.0 dBm with average PAE of 17.9% at 1.7 GHz are obtained with −21.6-dB error vector magnitude (EVM). Moreover, the PA achieves 20.8-dBm P_avg, 22.7% average PAE, and −30.5-dB EVM for a 20-MHz 256-QAM WLAN signal.
Wideband Mixed-Domain Multi-Tap Finite-Impulse Response Filtering of Out-of-Band Noise Floor in Watt-Class Digital Transmitters. A major drawback of digital transmitters (DTX) is the absence of a reconstruction filter after digital-to-analog conversion which causes the baseband quantization noise to get upconverted to radio frequency and amplified at the output of the transmitter. In high power transmitters, this upconverted noise can be so strong as to prevent their use in frequency-division duplexing systems due to receiv...
A fully-integrated efficient CMOS inverse Class-D power amplifier for digital polar transmitters We demonstrate a fully-integrated, high-efficiency inverse Class-D power amplifier in 65nm CMOS process. Such efficient switching amplifiers can form the core of digital polar transmitters. A comprehensive analytical framework has been developed to reveal the design trade-offs and enable efficiency maximization. Operating from a 1-V supply, the PA delivers 22dBm output power with a high efficiency of 44% without using any RF process options. The PA efficiency is comparable to that of state-of-the-art CMOS switching PAs, though it uses a much simpler output matching network. The PA has been integrated into a mixed-signal polar transmitter and meets the 802.11g (54Mbps 64QAM OFDM) spectral mask and EVM requirements with more than 18% average efficiency.
A Flip-Chip-Packaged 25.3 dBm Class-D Outphasing Power Amplifier in 32 nm CMOS for WLAN Application A 2.4 GHz outphasing power amplifier (PA) is implemented in a 32 nm CMOS process. An inverter-based class-D PA topology is utilized to obtain low output impedance and good linearity in the outphasing system. MOS switch non-idealities, such as finite on-resistance and finite rise and fall times are analyzed for their impact on outphasing linearity and efficiency. Outphasing combining is performed via a transformer configured to achieve reduced loss at power backoff. The fabricated class-D outphasing PA delivers 25.3 dBm peak CW power with 35% total system Power Added Efficiency (includes all drivers). Average OFDM power is 19.6 dBm with efficiency 21.8% when transmitting WiFi signals with no linearization required. The PA is packaged in a flip-chip BGA package. Good linearity performance (ACPR and EVM) demonstrates the applicability of inverter-based class-D amplifiers for outphasing configurations.
A fully digital multimode polar transmitter employing 17b RF DAC in 3G mode.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results.
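For reference, the classical Lloyd-Max conditions that the abstract generalizes are the following. This is the standard scalar-quantizer result, stated here for context rather than taken from the paper's distributed, multi-sensor setting.

```latex
% Optimal N-level scalar quantizer for a source X with density f_X:
% thresholds sit midway between adjacent reconstruction levels, and each level
% is the centroid of its cell (t_0 and t_N are the support boundaries).
\begin{aligned}
t_k &= \frac{y_k + y_{k+1}}{2}, \qquad k = 1,\dots,N-1,\\[2pt]
y_k &= \mathbb{E}\!\left[X \mid t_{k-1} \le X < t_k\right]
     = \frac{\int_{t_{k-1}}^{t_k} x\, f_X(x)\,dx}{\int_{t_{k-1}}^{t_k} f_X(x)\,dx},
\qquad k = 1,\dots,N.
\end{aligned}
```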
Initializing newly deployed ad hoc and sensor networks A newly deployed multi-hop radio network is unstructured and lacks a reliable and efficient communication scheme. In this paper, we take a step towards analyzing the problems existing during the initialization phase of ad hoc and sensor networks. Particularly, we model the network as a multi-hop quasi unit disk graph and allow nodes to wake up asynchronously at any time. Further, nodes do not feature a reliable collision detection mechanism, and they have only limited knowledge about the network topology. We show that even for this restricted model, a good clustering can be computed efficiently. Our algorithm efficiently computes an asymptotically optimal clustering. Based on this algorithm, we describe a protocol for quickly establishing synchronized sleep and listen schedule between nodes within a cluster. Additionally, we provide simulation results in a variety of settings.
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Communication-efficient failure detection and consensus in omission environments Failure detectors have been shown to be a very useful mechanism to solve the consensus problem in the crash failure model, for which a number of communication-efficient algorithms have been proposed. In this paper we deal with the definition, implementation and use of communication-efficient failure detectors in the general omission failure model, where processes can fail by crashing and by omitting messages when sending and/or receiving. We first define a new failure detector class for this model in terms of completeness and accuracy properties. Then we propose an algorithm that implements a failure detector of the proposed class in a communication-efficient way, in the sense that only a linear number of links are used to send messages forever. We also explain how the well-known consensus algorithm of Chandra and Toueg can be adapted in order to use the proposed failure detector.
A 10-Bit 800-MHz 19-mW CMOS ADC A pipelined ADC employs charge-steering op amps to relax the trade-offs among speed, noise, and power consumption. Applying full-rate nonlinearity and gain error calibration, a prototype realized in 65-nm CMOS technology achieves an SNDR of 52.2 dB at an input frequency of 399.2 MHz and an FoM of 53 fJ/conversion-step.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
score_0–score_13: 1.034472, 0.033333, 0.033333, 0.033333, 0.033333, 0.019137, 0.005756, 0.000145, 0, 0, 0, 0, 0, 0
Handoff management for mobile devices in hybrid wireless data networks. Today's wireless access networks consist of several tiers that overlap each other. Provisioning of real time undisrupted communication to mobile users, anywhere and anytime through these heterogeneous overlay networks, is a challenging task. We extend the end-to-end approach for the handoff management in hybrid wireless data network by designing a fully mobile-controlled handoff for mobile devices equipped with dual mode interfaces. By handoff, we mean switching the communication between interfaces connected to different subnets. This mobile-controlled hand-off scheme reduces the service disruption time during both horizontal and vertical handoffs and does not require any modification in the access networks. We exploit the IP diversity created by the dual interfaces in the overlapping area by simultaneously connecting to different subnets and networks. Power saving is achieved by activating both interfaces only during the handoff period. The performance evaluation of the handoff is carried out by a simple mathematical analysis. The analysis shows that with proper network engineering, exploiting the speed of mobile node and overlapping area between subnets can reduce service disruption and power consumption during handoff significantly. We believe that with more powerful network interfaces our proposal of dual interfaces can be realized.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
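A tiny before/after example may make the SSA idea concrete. This is an illustrative sketch only; the φ-function is shown in comments because it is an intermediate-representation construct rather than executable Python.

```python
# Before SSA: one variable is assigned on both branches and then used.
def before(c):
    if c:
        x = 1
    else:
        x = 2
    return x + 1

# After conversion to SSA form (as comments): every definition gets a unique
# name, and a phi-function at the control-flow join selects between them.
# Phi-functions are placed using dominance frontiers, the concept introduced above.
#
#   if c:
#       x1 = 1
#   else:
#       x2 = 2
#   x3 = phi(x1, x2)
#   return x3 + 1
```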
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
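The core mapping rule is easy to sketch. The Python snippet below is a minimal illustration of consistent hashing onto the identifier ring; the hash truncation, ring size `M`, and node names are assumptions made for the example, and finger tables, joins, and failures are omitted.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16                      # size of the identifier ring (illustrative)

def chord_id(name: str) -> int:
    """Hash a key or node name onto the identifier circle (truncated SHA-1 here)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(key: str, node_ids: list[int]) -> int:
    """Chord's basic rule: a key lives on the first node whose identifier is equal
    to or follows the key's identifier on the ring, wrapping around at the top."""
    k = chord_id(key)
    i = bisect_left(node_ids, k)
    return node_ids[i % len(node_ids)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
print(successor("some-data-item", nodes))   # identifier of the responsible node
```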
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
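For reference, the iteration the review is organized around is short enough to state in full. In scaled form, for minimizing f(x) + g(z) subject to Ax + Bz = c with penalty parameter ρ > 0 and scaled dual variable u:

```latex
% Scaled-form ADMM updates (the standard iteration reviewed in the paper):
\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2, \\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2, \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```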
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0–score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Hardware acceleration of BWA-MEM genomic short read mapping for longer read lengths. We present our work on hardware accelerated genomics pipelines, using either FPGAs or GPUs to accelerate execution of BWA-MEM, a widely-used algorithm for genomic short read mapping. The mapping stage can take up to 40% of overall processing time for genomics pipelines. Our implementation offloads the Seed Extension function, one of the main BWA-MEM computational functions, onto an accelerator.
SWIFOLD: Smith-Waterman implementation on FPGA with OpenCL for long DNA sequences. The results suggest that SWIFOLD can be a serious contender for accelerating the SW alignment of DNA sequences of unrestricted size in an affordable way reaching on average 125 GCUPS and almost a peak of 270 GCUPS.
GSWABE: faster GPU-accelerated sequence alignment with optimal alignment retrieval for short DNA sequences In this paper, we present GSWABE, a graphics processing unit (GPU)-accelerated pairwise sequence alignment algorithm for a collection of short DNA sequences. This algorithm supports all-to-all pairwise global, semi-global and local alignment, and retrieves optimal alignments on Compute Unified Device Architecture (CUDA)-enabled GPUs. All of the three alignment types are based on dynamic programming and share almost the same computational pattern. Thus, we have investigated a general tile-based approach to facilitating fast alignment by deeply exploring the powerful compute capability of CUDA-enabled GPUs. The performance of GSWABE has been evaluated on a Kepler-based Tesla K40 GPU using a variety of short DNA sequence datasets. The results show that our algorithm can yield a performance of up to 59.1 billion cell updates per second (GCUPS), 58.5 GCUPS and 50.3 GCUPS for global, semi-global and local alignment, respectively. Furthermore, on the same system GSWABE runs up to 156.0 times faster than the Streaming SIMD Extensions (SSE)-based SSW library and up to 102.4 times faster than the CUDA-based MSA-CUDA (the first stage) in terms of local alignment. Compared with the CUDA-based gpu-pairAlign, GSWABE demonstrates stable and consistent speedups with a maximum speedup of 11.2, 10.7, and 10.6 for global, semi-global, and local alignment, respectively.
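The alignment accelerators above are all built around the same local-alignment recurrence; the linear-gap form is shown below only to make the computational pattern concrete (the GPU/FPGA designs typically use the affine-gap variant, which adds auxiliary matrices but has the same dependency structure).

```latex
% Smith-Waterman local alignment score matrix H for sequences a and b,
% with substitution score s(.,.) and linear gap penalty d:
H_{i,j} = \max\bigl\{\, 0,\;
  H_{i-1,j-1} + s(a_i, b_j),\;
  H_{i-1,j} - d,\;
  H_{i,j-1} - d \,\bigr\},
\qquad H_{i,0} = H_{0,j} = 0 .
```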
Emerging Trends in Design and Applications of Memory-Based Computing and Content-Addressable Memories Content-addressable memory (CAM) and associative memory (AM) are types of storage structures that allow searching by content as opposed to searching by address. Such memory structures are used in diverse applications ranging from branch prediction in a processor to complex pattern recognition. In this paper, we review the emerging challenges and opportunities in implementing different varieties of...
FindeR: Accelerating FM-Index-Based Exact Pattern Matching in Genomic Sequences through ReRAM Technology Genomics is the critical key to enabling precision medicine, ensuring global food security and enforcing wildlife conservation. The massive genomic data produced by various genome sequencing technologies presents a significant challenge for genome analysis. Because of errors from sequencing machines and genetic variations, approximate pattern matching (APM) is a must for practical genome analysis. Recent work proposes FPGA, ASIC and even process-in-memory-based accelerators to boost the APM throughput by accelerating dynamic-programming-based algorithms (e.g., Smith-Waterman). However, existing accelerators lack the efficient hardware acceleration for the exact pattern matching (EPM) that is an even more critical and essential function widely used in almost every step of genome analysis including assembly, alignment, annotation and compression. State-of-the-art genome analysis adopts the FM-Index that augments the space-efficient BWT with additional data structures permitting fast EPM operations. But the FM-Index is notorious for poor spatial locality and massive random memory accesses. In this paper, we propose a ReRAM-based process-in-memory architecture, FindeR, to enhance the FM-Index EPM search throughput in genomic sequences. We build a reliable and energy-efficient Hamming distance unit to accelerate the computing kernel of FM-Index search using commodity ReRAM chips without introducing extra CMOS logic. We further architect a full-fledged FM-Index search pipeline and improve its search throughput by lightweight scheduling on the NVDIMM. We also create a system library for programmers to invoke FindeR to perform EPMs in genome analysis. Compared to state-of-the-art accelerators, FindeR improves the FM-Index search throughput by 83% ~ 30K× and throughput per Watt by 3.5×~42.5K×.
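For context on the kernel FindeR accelerates, the following is a plain-Python sketch of FM-index backward search with naively precomputed C and Occ tables; the paper's contribution is mapping the rank (Occ) computation onto ReRAM Hamming-distance units, which is not modeled here.

```python
def bwt_tables(text: str):
    """Build BWT, C table and occurrence counts for a '$'-terminated text."""
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)
    alphabet = sorted(set(text))
    # C[c]: number of characters in the text strictly smaller than c
    C, total = {}, 0
    for c in alphabet:
        C[c] = total
        total += text.count(c)
    # occ[c][i]: occurrences of c in bwt[:i]
    occ = {c: [0] * (len(bwt) + 1) for c in alphabet}
    for i, ch in enumerate(bwt):
        for c in alphabet:
            occ[c][i + 1] = occ[c][i] + (1 if ch == c else 0)
    return C, occ

def count_matches(pattern: str, C, occ, n: int) -> int:
    """FM-index backward search: number of exact occurrences of pattern."""
    lo, hi = 0, n + 1          # current suffix-array interval [lo, hi)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

C, occ = bwt_tables("ACGTACGTAC")
print(count_matches("ACG", C, occ, len("ACGTACGTAC")))  # -> 2
```

The per-character update touches only the C and Occ tables, which is exactly the memory-bound, random-access step that in-memory accelerators target.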
GateKeeper-GPU: Fast and Accurate Pre-Alignment Filtering in Short Read Mapping We introduce GateKeeper-GPU, a fast and accurate pre-alignment filter that efficiently reduces the need for expensive sequence alignment. GateKeeper-GPU improves the filtering accuracy of GateKeeper, and by exploiting the massive parallelism provided by GPU threads it concurrently examines numerous sequence pairs rapidly. GateKeeper-GPU is available at https://github.com/BilkentCompGen/GateKeeper-...
ONT-X: An FPGA Approach to Real-time Portable Genomic Analysis Oxford Nanopore Technologies (ONT) MinION is a pocket-sized portable DNA sequencer for on-the-field DNA sequencing. The ONT data analysis pipeline requires substantial amounts of computational resources for the basecalling and read alignment step in the pipeline. We study the compute and memory requirement of the basecalling step and present the detailed microarchitecture for accelerating it on FP...
PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA) which enhances memristor crossbars with general purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques exposed through a specialized Instruction Set Architecture (ISA) retain the efficiency of in-memory computing and analog circuitry, without compromising programmability. We also present the PUMA compiler which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of 577 GOPS/s/mm² and 837 GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446× energy and 66× latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.
Why systolic architectures?
The impact of DHT routing geometry on resilience and proximity The various proposed DHT routing algorithms embody several different underlying routing geometries. These geometries include hypercubes, rings, tree-like structures, and butterfly networks. In this paper we focus on how these basic geometric approaches affect the resilience and proximity properties of DHTs. One factor that distinguishes these geometries is the degree of flexibility they provide in the selection of neighbors and routes. Flexibility is an important factor in achieving good static resilience and effective proximity neighbor and route selection. Our basic finding is that, despite our initial preference for more complex geometries, the ring geometry allows the greatest flexibility, and hence achieves the best resilience and proximity performance.
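To make the "neighbor-selection flexibility" of the ring geometry concrete, here is a small sketch in the style of Chord: the i-th routing entry may be filled by any member whose clockwise distance falls in an interval, and a proximity-aware variant simply picks the lowest-latency candidate in that interval. The interval rule follows common Chord conventions; the latencies are made up for illustration.

```python
def ring_distance(a, b, m):
    """Clockwise distance from a to b on an identifier ring of size 2**m."""
    return (b - a) % (2 ** m)

def candidate_neighbors(node, i, members, m):
    """Ring geometry: the i-th routing entry may be ANY member whose clockwise
    distance from `node` lies in [2**i, 2**(i+1)); this interval is the source
    of the ring's neighbor-selection flexibility."""
    lo, hi = 2 ** i, 2 ** (i + 1)
    return [p for p in members if lo <= ring_distance(node, p, m) < hi]

def pick_proximate(node, i, members, latency, m):
    """Proximity neighbor selection: lowest-latency candidate in the interval."""
    cands = candidate_neighbors(node, i, members, m)
    return min(cands, key=lambda p: latency[p]) if cands else None

m = 6                                              # 64-slot identifier ring
members = [1, 5, 9, 18, 23, 34, 41, 50, 60]
latency = {p: (p * 7) % 13 + 1 for p in members}   # made-up link latencies
for i in range(m):
    print(i, candidate_neighbors(0, i, members, m),
          "->", pick_proximate(0, i, members, latency, m))
```

The larger intervals (high i) contain many candidates, so a ring can trade proximity against static resilience when filling those entries, which is the flexibility the paper measures.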
A 9-bit, 14 μW and 0.06 mm² Pulse Position Modulation ADC in 90 nm Digital CMOS. This work presents a compact, low-power, time-based architecture for nanometer-scale CMOS analog-to-digital conversion. A pulse position modulation ADC architecture is proposed and a prototype 9 bit PPM ADC incorporating a two-step TDC scheme is presented as proof of concept. The 0.06 mm² prototype is implemented in 90 nm CMOS and achieves 7.9 effective bits across the entire input bandwidth and d...
Nonlinear semidefinite programming: sensitivity, convergence, and an application in passive reduced-order modeling We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. This paper studies a sequential semidefinite programming (SSP) method, which is a generalization of the well-known sequential quadratic programming method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and then based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
A study of disturbance observers with unknown relative degree of the plant. Robust stability of the disturbance observer (DOB) control system is studied when the relative degree of the plant is not the same as that of the nominal model. The study reveals that the closed-loop system can easily become unstable with sufficiently fast Q-filter when the relative degree of the plant is not known. In a few cases of unknown relative degree, however, robust stability can be obtained, and we present a design guideline of the nominal model, as well as the Q-filter, for that purpose. Moreover, a universal design of DOB is given for a plant whose relative degree is uncertain but less than or equal to four.
OMNI: A Framework for Integrating Hardware and Software Optimizations for Sparse CNNs Convolution neural networks (CNNs), as one of today’s main flavors of deep learning techniques, dominate various image recognition tasks. As the model size of modern CNNs continues to grow, neural network compression techniques have been proposed to prune the redundant neurons and synapses. However, prior techniques disconnect the software neural network compression and hardware acceleration, whi...
1.11
0.1
0.1
0.1
0.1
0.1
0.1
0.033333
0
0
0
0
0
0
NACHOS: Software-Driven Hardware-Assisted Memory Disambiguation for Accelerators Hardware accelerators have relied on the compiler to extract instruction parallelism but may waste significant energy in enforcing memory ordering and discovering memory parallelism. Accelerators tend to either serialize memory operations [43] or reuse power hungry load-store queues (LSQs) [8], [27]. Recent works [11], [15] use the compiler for scheduling but continue to rely on LSQs for memory disambiguation. NACHOS is a hardware assisted software-driven approach to memory disambiguation for accelerators. In NACHOS, the compiler classifies pairs of memory operations as NO alias (i.e., independent memory operations), MUST alias (i.e., ordering required), or MAY alias (i.e., compiler uncertain). We developed a compiler-only approach called NACHOS-SW that serializes memory operations both when the compiler is certain (MUST alias) and uncertain (MAY alias). Our study analyzes multiple stages of alias analysis on 135 acceleration regions extracted from SPEC2K, SPEC2k6, and PARSEC. NACHOS-SW is energy efficient, but serialization limits performance; 18%-100% slowdown compared to an optimized LSQ. We then proposed NACHOS, a low-overhead, scalable, hardware comparator assist that dynamically verifies MAY alias and executes independent memory operations in parallel. NACHOS is a pay-as-you-go approach where the compiler filters out memory operations to save dynamic energy, and the hardware dynamically checks to find MLP. NACHOS achieves performance comparable to an optimized LSQ; in fact, it improved performance in 6 benchmarks (6%-70%) by reducing load-to-use latency for cache hits. NACHOS imposes no energy overhead in 15 out of 27 benchmarks, i.e., the compiler accurately determines all memory dependencies; the average energy overhead is ≃6% of total (accelerator and L1 cache); in comparison, an optimized LSQ consumes 27% of total energy. NACHOS is released as free and open source software. Github: https://github.com/sfu-arch/nachos.
Compiler algorithms for synchronization Translating program loops into a parallel form is one of the most important transformations performed by concurrentizing compilers. This transformation often requires the insertion of synchronization instructions within the body of the concurrent loop. Several loop synchronization techniques are presented first. Compiler algorithms to generate synchronization instructions for singly-nested loops are then discussed. Finally, a technique for the elimination of redundant synchronization instructions is presented.
A Software Scheme for Multithreading on CGRAs Recent industry trends show a drastic rise in the use of hand-held embedded devices, from everyday applications to medical (e.g., monitoring devices) and critical defense applications (e.g., sensor nodes). The two key requirements in the design of such devices are their processing capabilities and battery life. There is therefore an urgency to build high-performance and power-efficient embedded devices, inspiring researchers to develop novel system designs for the same. The use of a coprocessor (application-specific hardware) to offload power-hungry computations is gaining favor among system designers to suit their power budgets. We propose the use of CGRAs (Coarse-Grained Reconfigurable Arrays) as a power-efficient coprocessor. Though CGRAs have been widely used for streaming applications, the extensive compiler support required limits its applicability and use as a general purpose coprocessor. In addition, a CGRA structure can efficiently execute only one statically scheduled kernel at a time, which is a serious limitation when used as an accelerator to a multithreaded or multitasking processor. In this work, we envision a multithreaded CGRA where multiple schedules (or kernels) can be executed simultaneously on the CGRA (as a coprocessor). We propose a comprehensive software scheme that transforms the traditionally single-threaded CGRA into a multithreaded coprocessor to be used as a power-efficient accelerator for multithreaded embedded processors. Our software scheme includes (1) a compiler framework that integrates with existing CGRA mapping techniques to prepare kernels for execution on the multithreaded CGRA and (2) a runtime mechanism that dynamically schedules multiple kernels (offloaded from the processor) to execute simultaneously on the CGRA coprocessor. Our multithreaded CGRA coprocessor implementation thus makes it possible to achieve improved power-efficient computing in modern multithreaded embedded systems.
Domain Specialization Is Generally Unnecessary for Accelerators. Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator i...
PathSeeker: A Fast Mapping Algorithm for CGRAs Coarse-grained reconfigurable arrays (CGRAs) have gained traction over the years as a low-power accelerator due to the efficient mapping of the compute-intensive loops onto the 2-D array by the CGRA compiler. When encountering a mapping failure for a given node, existing mapping techniques either exit and retry the mapping anew, or perform backtracking, i.e., recursively remove the previously mapped node to find a valid mapping. Abandoning mapping and starting afresh can deteriorate the quality of mapping and the compilation time. Even backtracking may not be the best choice since the previous node may not be the incorrectly placed node. To tackle this issue, we propose PathSeeker - a mapping approach that analyzes mapping failures and performs local adjustments to the schedule to obtain a mapping. Experimental results on 35 top performance-critical loops from MiBench, Rodinia, and Parboil benchmark suites demonstrate that PathSeeker can map all of them with better mapping quality and dramatically less compilation time than the previous state-of-the-art approaches - GraphMinor and RAMP, which were unable to map 20 and 5 loops, respectively. Over these benchmarks, PathSeeker achieves 28% better performance at 550x compilation speedup over GraphMinor and 3% better performance at 10x compilation speedup over RAMP on a 4x4 CGRA.
OpenCGRA: An Open-Source Unified Framework for Modeling, Testing, and Evaluating CGRAs Coarse-grained reconfigurable arrays (CGRAs), loosely defined as arrays of functional units (e.g., adder, subtractor, multiplier, divider, or larger multi-operation units, but smaller than a general-purpose core) interconnected through a Network-on-Chip, provide higher flexibility than domain-specific ASIC accelerators while offering increased hardware efficiency with respect to fine-grained reconfigurable devices, such as Field Programmable Gate Arrays (FPGAs). The fast evolving fields of machine learning and edge computing, which are seeing a continuous flow of novel algorithms and larger models, make CGRAs ideal architectures to allow domain specialization without losing too much generality. Designing and generating a CGRA, however, still requires to define the type and number of the specific functional units, implement their interconnect and the network topology, and perform the simulation and validation, given a variety of workloads of interest. In this paper, we propose OpenCGRA, the first open-source integrated framework that is able to support the full top-to-bottom design flow for specializing and implementing CGRAs: modeling at different abstraction levels (functional level, cycle level, register-transfer level) with compiler support, verification at different granularities (unit testing, integration testing, property-based testing), simulation, generation of synthesizable Verilog, and characterization (area, power, and timing). By using OpenCGRA, it only takes a few hours to build a specialized power- and area-efficient CGRA throughout the entire design flow given a set of applications of interest. OpenCGRA is available online at https://github.com/pnnl/OpenCGRA.
Fifer: Practical Acceleration of Irregular Applications on Reconfigurable Architectures Coarse-grain reconfigurable arrays (CGRAs) can achieve much higher performance and efficiency than general-purpose cores, approaching the performance of a specialized design while retaining programmability. Unfortunately, CGRAs have so far only been effective on applications with regular compute patterns. However, many important workloads like graph analytics, sparse linear algebra, and databases, are irregular applications with unpredictable access patterns and control flow. Since CGRAs map computation statically to a spatial fabric of functional units, irregular memory accesses and control flow cause frequent stalls and load imbalance. We present Fifer, an architecture and compilation technique that makes irregular applications efficient on CGRAs. Fifer first decouples irregular applications into a feed-forward network of pipeline stages. Each resulting stage is regular and can efficiently use the CGRA fabric. However, irregularity causes stages to have widely varying loads, resulting in high load imbalance if they execute spatially in a conventional CGRA. Fifer solves this by introducing dynamic temporal pipelining: it time-multiplexes multiple stages onto the same CGRA, and dynamically schedules stages to avoid load imbalance. Fifer makes time-multiplexing fast and cheap to quickly respond to load imbalance while retaining the efficiency and simplicity of a CGRA design. We show that Fifer improves performance by gmean 2.8× (and up to 5.5×) over a conventional CGRA architecture (and by gmean 17× over an out-of-order multicore) on a variety of challenging irregular applications.
Accelerating Dependent Cache Misses with an Enhanced Memory Controller. On-chip contention increases memory access latency for multicore processors. We identify that this additional latency has a substantial effect on performance for an important class of latency-critical memory operations: those that result in a cache miss and are dependent on data from a prior cache miss. We observe that the number of instructions between the first cache miss and its dependent cache miss is usually small. To minimize dependent cache miss latency, we propose adding just enough functionality to dynamically identify these instructions at the core and migrate them to the memory controller for execution as soon as source data arrives from DRAM. This migration allows memory requests issued by our new Enhanced Memory Controller (EMC) to experience a 20% lower latency than if issued by the core. On a set of memory intensive quad-core workloads, the EMC results in a 13% improvement in system performance and a 5% reduction in energy consumption over a system with a Global History Buffer prefetcher, the highest performing prefetcher in our evaluation.
Memristor-Based Material Implication (IMPLY) Logic: Design Principles and Methodologies Memristors are novel devices, useful as memory at all hierarchies. These devices can also behave as logic circuits. In this paper, the IMPLY logic gate, a memristor-based logic circuit, is described. In this memristive logic family, each memristor is used as an input, output, computational logic element, and latch in different stages of the computing process. The logical state is determined by the resistance of the memristor. This logic family can be integrated within a memristor-based crossbar, commonly used for memory. In this paper, a methodology for designing this logic family is proposed. The design methodology is based on a general design flow, suitable for all deterministic memristive logic families, and includes some additional design constraints to support the IMPLY logic family. An IMPLY 8-bit full adder based on this design methodology is presented as a case study.
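Because the IMPLY family computes with only material implication plus a FALSE (reset) operation, its logic can be prototyped in ordinary software before committing to a crossbar; the Boolean sketch below illustrates that composition (building NAND from two IMPLY steps) and is not the circuit-level design methodology of the paper.

```python
def imply(p: bool, q: bool) -> bool:
    """Material implication: p IMPLY q = (NOT p) OR q.

    In the memristive gate the result overwrites q's memristor, so the
    operation is destructive on q; here the caller stores the new value back.
    """
    return (not p) or q

def false_op() -> bool:
    """RESET: unconditionally drive a (work) memristor to logic 0."""
    return False

def nand(p: bool, q: bool) -> bool:
    """NAND built from one RESET and two sequential IMPLY steps."""
    s = false_op()          # step 0: clear the work device
    s = imply(q, s)         # step 1: s = NOT q
    s = imply(p, s)         # step 2: s = (NOT p) OR (NOT q) = NAND(p, q)
    return s

for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
print("NAND truth table verified")
```

Since NAND is functionally complete, this sequencing argument is why an IMPLY-capable crossbar can, in principle, evaluate any Boolean function, with the design methodology in the paper handling the analog constraints.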
Randomized Smoothing for Stochastic Optimization. We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for nonsmooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
Self-stabilizing leader election in dynamic networks Three silent self-stabilizing asynchronous distributed algorithms are given for the leader election problem in a dynamic network with unique IDs, using the composite model of computation. A leader is elected for each connected component of the network. A BFS tree is also constructed in each component, rooted at the leader. This election takes O(Diam) rounds, where Diam is the maximum diameter of any component. Links and processes can be added or deleted, and data can be corrupted. After each such topological change or data corruption, the leader and BFS tree are recomputed if necessary. All three algorithms work under the unfair daemon. The three algorithms differ in their leadership stability. The first algorithm, which is the fastest in the worst case, chooses an arbitrary process as the leader. The second algorithm chooses the process of highest priority in each component, where priority can be defined in a variety of ways. The third algorithm has the strictest leadership stability. If the configuration is legitimate, and then any number of topological faults occur at the same time but no variables are corrupted, the third algorithm will converge to a new legitimate state in such a manner that no process changes its choice of leader more than once, and each component will elect a process which was a leader before the fault, provided there is at least one former leader in that component.
Enabling Wide-Area Service Oriented Architecture through the p2pWeb Model In this paper we present the p2pWeb service oriented architecture (SOA). The p2pWeb model offers decentralized solutions for service description, publication, discovery and availability, following the web services standards. The three innovative contributions in p2pWeb SOA are: easy integration of web services into a p2pWeb network, secure and decentralized web services deployment, and transparent location, load balancing and fault-tolerance peer-to-peer mechanisms. We believe that all the features our p2pWeb model offers can be of special interest for the creation of communities, and for the development of future collaborative decentralized applications.
Prediction-Based Stabilization of Linear Systems Subject to Input-Dependent Input Delay of Integral-Type In this paper, it is proved that a predictor-based feedback controller can effectively yield asymptotic convergence for a class of linear systems subject to input-dependent input delay. This class is characterized by the delay being implicitly related to past values of the input via an integral model. This situation is representative of systems where transport phenomena take place, as is frequent in the process industry. The sufficient conditions obtained for asymptotic stabilization bring a local result and require the magnitude of the feedback gain to be consistent with the initial conditions scale. Arguments of proof for this novel result include general Halanay inequalities for delay differential equations and build on recent advances of backstepping techniques for uncertain or varying delay systems.
A Data-Compressive Wired-OR Readout for Massively Parallel Neural Recording. Neural interfaces of the future will be used to help restore lost sensory, motor, and other capabilities. However, realizing this futuristic promise requires a major leap forward in how electronic devices interface with the nervous system. Next generation neural interfaces must support parallel recording from tens of thousands of electrodes within the form factor and power budget of a fully implan...
1.102108
0.1
0.1
0.1
0.1
0.05
0.033333
0.00081
0.000077
0
0
0
0
0
Input-adaptive dual-output power management unit for energy harvesting devices An input-adaptive dual-output charge pump (DOQP) with variable fractional conversion ratio and low dropout regulators (LDRs) in cascade is implemented for the power management unit (PMU) of implantable energy harvesting devices. The charge pump has one step-down and one step-up output adaptively converted from a 1.8 to 4.0V harvested energy source, and the outputs of the LDRs are 1V and 3V respectively. To improve the overall efficiency, conversion ratios of k/6 (k=2,..., 12) are realized by 1/2- and 1/3-capacitors using an interleaving scheme. The PMU is designed using a 0.13μm 3.3V CMOS process, and attains the peak efficiency of 81.3% and efficiency better than 55% for a wide input range.
Analysis and Design Strategy of On-Chip Charge Pumps for Micro-power Energy Harvesting Applications.
A 22 nm All-Digital Dynamically Adaptive Clock Distribution for Supply Voltage Droop Tolerance An all-digital dynamically adaptive clock distribution mitigates the impact of high-frequency supply voltage droops on microprocessor performance and energy efficiency. The design integrates a tunable-length delay prior to the global clock distribution to prolong the clock-data delay compensation in critical paths during a droop. The tunable-length delay prevents critical-path timing-margin degradation for multiple cycles after the droop occurs, thus allowing a sufficient response time for dynamic adaptation. An on-die dynamic variation monitor detects the onset of the droop to proactively gate the clock at the end of the tunable-length delay to eliminate the clock edges that would otherwise degrade critical-path timing margin. In comparison to a conventional clock distribution, silicon measurements from a 22 nm test chip demonstrate simultaneous throughput gains and energy reductions of 14% and 3% at 1.0 V, 18% and 5% at 0.8 V, and 31% and 15% at 0.6 V, respectively, for a 10% droop.
17.11 A 0.65ns-response-time 3.01ps FOM fully-integrated low-dropout regulator with full-spectrum power-supply-rejection for wideband communication systems High performance low-dropout regulators (LDOs) are indispensable in a system-on-a-chip (SoC) due to their low output noise, fast transient response and good power supply rejection (PSR) characteristics. In general, differential analog circuit loads need an LDO with high PSR, digital circuit loads need an LDO with fast load transient response, while single-ended analog/RF circuit loads need an LDO with both high PSR and fast transient response. Figure 17.11.1 shows an LDO embedded in an optical receiver that helps improve the sensitivity of the front-end system. On-chip LDOs with PSR in the GHz range are in high demand for wideband optical communication systems because there is only one photo detector in the optical receiver and supply voltage variations would degrade its sensitivity severely.
A capacitor-free CMOS low-dropout regulator with damping-factor-control frequency compensation A 1.5-V 100-mA capacitor-free CMOS low-dropout regulator (LDO) for system-on-chip applications to reduce board space and external pins is presented. By utilizing damping-factor-control frequency compensation on the advanced LDO structure, the proposed LDO provides high stability, as well as fast line and load transient responses, even in capacitor-free operation. The proposed LDO has been implemented in a commercial 0.6-μm CMOS technology, and the active chip area is 568 μm×541 μm. The total error of the output voltage due to line and load variations is less than ±0.25%, and the temperature coefficient is 38 ppm/°C. Moreover, the output voltage can recover within 2 μs for full load-current changes. The power-supply rejection ratio at 1 MHz is -30 dB, and the output noise spectral densities at 100 Hz and 100 kHz are 1.8 and 0.38 μV/√Hz, respectively.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
When hardware is free, power is expensive! Is integrated power management the solution? In the last several years, significant efforts and advances have been made towards the CMOS integration of power converters. In this paper, an overview is given of what might be considered the next step in this domain: AC-DC conversion, efficient high-ratio voltage conversion, wide operating range and energy storage for energy scavenging. The main focus is on CMOS integration as this is the ultimate goal from any system integration point of view. Also, an overview of the state of the art will be discussed.
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm2 in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Merged Two-Stage Power Converter With Soft Charging Switched-Capacitor Stage in 180 nm CMOS In this paper, we introduce a merged two-stage dc-dc power converter for low-voltage power delivery. By separating the transformation and regulation function of a dc-dc power converter into two stages, both large voltage transformation and high switching frequency can be achieved. We show how the switched-capacitor stage can operate under soft charging conditions by suitable control and integration (merging) of the two stages. This mode of operation enables improved efficiency and/or power density in the switched-capacitor stage. A 5-to-1 V, 0.8 W integrated dc-dc converter has been developed in 180 nm CMOS. The converter achieves a peak efficiency of 81%, with a regulation stage switching frequency of 10 MHz.
ADRES: An Architecture with Tightly Coupled VLIW Processor and Coarse-Grained Reconfigurable Matrix The coarse-grained reconfigurable architectures have advantages over the traditional FPGAs in terms of delay, area and configuration time. To execute entire applications, most of them combine an instruction set processor(ISP) and a reconfigurable matrix. However, not much attention is paid to the integration of these two parts, which results in high communication overhead and programming difficulty. To address this problem, we propose a novel architecture with tightly coupled very long instruction word (VLIW) processor and coarse-grained reconfigurable matrix. The advantages include simplified programming model, shared resource costs, and reduced communication overhead. To exploit this architecture, our previously developed compiler framework is adapted to the new architecture. The results show that the new architecture has good performance and is very compiler-friendly.
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use An LNA based on a noise-cancelled common gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, a NF of 3 dB, and S11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW.
Fast-Transient DC–DC Converter With On-Chip Compensated Error Amplifier A fast-transient dc-dc converter with on-chip compensated error amplifier is presented in this paper. The error amplifier uses three transistors and one voltage follower to implement the on-chip current-mode Miller capacitor. Not only on-chip compensated error amplifier is implemented without off-chip components and I/O pins, but also fast-transient response is achieved. Moreover, we accurately de...
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.20431
0.102955
0.102495
0.06833
0.041864
0.003401
0.0012
0.000012
0
0
0
0
0
0
Simultaneous Multi-Layer Access: Improving 3D-Stacked Memory Bandwidth at Low Cost 3D-stacked DRAM alleviates the limited memory bandwidth bottleneck that exists in modern systems by leveraging through silicon vias (TSVs) to deliver higher external memory channel bandwidth. Today's systems, however, cannot fully utilize the higher bandwidth offered by TSVs, due to the limited internal bandwidth within each layer of the 3D-stacked DRAM. We identify that the bottleneck to enabling higher bandwidth in 3D-stacked DRAM is now the global bitline interface, the connection between the DRAM row buffer and the peripheral IO circuits. The global bitline interface consists of a limited and expensive set of wires and structures, called global bitlines and global sense amplifiers, whose high cost makes it difficult to simply scale up the bandwidth of the interface within a single DRAM layer in the 3D stack. We alleviate this bandwidth bottleneck by exploiting the observation that several global bitline interfaces already exist across the multiple DRAM layers in current 3D-stacked designs, but only a fraction of them are enabled at the same time. We propose a new 3D-stacked DRAM architecture, called Simultaneous Multi-Layer Access (SMLA), which increases the internal DRAM bandwidth by accessing multiple DRAM layers concurrently, thus making much greater use of the bandwidth that the TSVs offer. To avoid channel contention, the DRAM layers must coordinate with each other when simultaneously transferring data. We propose two approaches to coordination, both of which deliver four times the bandwidth for a four-layer DRAM, over a baseline that accesses only one layer at a time. Our first approach, Dedicated-IO, statically partitions the TSVs by assigning each layer to a dedicated set of TSVs that operate at a higher frequency. Unfortunately, Dedicated-IO requires a nonuniform design for each layer (increasing manufacturing costs), and its DRAM energy consumption scales linearly with the number of layers. Our second approach, Cascaded-IO, solves both issues by instead time multiplexing all of the TSVs across layers. Cascaded-IO reduces DRAM energy consumption by lowering the operating frequency of higher layers. Our evaluations show that SMLA provides significant performance improvement and energy reduction across a variety of workloads (55%/18% on average for multiprogrammed workloads, respectively) over a baseline 3D-stacked DRAM, with low overhead.
The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning The sigmoid function is the most commonly used function in feedforward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote the dynamic range, symmetry and slope of the function respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configurations in different layers. By regulating and modifying the sigmoid function parameter configuration in different layers, the error signal problem, oscillation problem and asymmetrical input problem can be reduced. To compare the learning capabilities and the learning rate of the hybrid sigmoidal networks with the conventional networks we have tested the two-spirals benchmark, which is known to be a very difficult task for backpropagation and its relatives.
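As a concrete illustration of a three-parameter sigmoid, the sketch below uses one plausible parameterization, with an amplitude for the dynamic range, a vertical offset for the symmetry, and a slope factor; the exact functional form used in the paper may differ.

```python
import math

def sigmoid3(x: float, a: float = 1.0, b: float = 0.0, c: float = 1.0) -> float:
    """Three-parameter sigmoid (assumed form):
    a sets the dynamic range, b the symmetry (vertical offset),
    c the slope around the origin.
    """
    return a / (1.0 + math.exp(-c * x)) - b

def sigmoid3_prime(x: float, a: float = 1.0, b: float = 0.0, c: float = 1.0) -> float:
    """Derivative, still cheap to evaluate from the function value."""
    s = (sigmoid3(x, a, b, c) + b) / a          # recover the underlying 0..1 logistic
    return a * c * s * (1.0 - s)

# A symmetric (bipolar) hidden unit vs. a standard logistic output unit
print(sigmoid3(0.5, a=2.0, b=1.0, c=1.0))   # range (-1, 1)
print(sigmoid3(0.5))                        # range (0, 1)
```

Mixing such configurations across layers (e.g., bipolar hidden units, logistic outputs) is one simple way to realize the hybrid sigmoidal network idea described in the abstract.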
Design, packaging, and architectural policy co-optimization for DC power integrity in 3D DRAM 3D DRAM is the next-generation memory system targeting high bandwidth, low power, and small form factor. This paper presents a cross-domain CAD/architectural platform that addresses DC power noise issues in 3D DRAM targeting stacked DDR3, Wide I/O, and hybrid memory cube technologies. Our design and analysis include both individual DRAM dies and a host logic die that communicates with them in the same stack. Moreover, our comprehensive solutions encompass all major factors in design, packaging, and architecture domains, including power delivery network wire sizing, redistribution layer routing, distributed, and dedicated TSV placement, die bonding style, backside wire bonding, and read policy optimization. We conduct regression analysis and optimization to obtain high quality solutions under noise, cost, and performance tradeoff. Compared with industry standard baseline designs and policies, our methods achieve up to 68.2% IR-drop reduction and 30.6% performance enhancement.
Enabling Practical Processing in and near Memory for Data-Intensive Computing Modern computing systems suffer from the dichotomy between computation on one side, which is performed only in the processor (and accelerators), and data storage/movement on the other, which all other parts of the system are dedicated to. Due to this dichotomy, data moves a lot in order for the system to perform computation on it. Unfortunately, data movement is extremely expensive in terms of energy and latency, much more so than computation. As a result, a large fraction of system energy is spent and performance is lost solely on moving data in a modern computing system. In this work, we re-examine the idea of reducing data movement by performing Processing in Memory (PIM). PIM places computation mechanisms in or near where the data is stored (i.e., inside the memory chips, in the logic layer of 3D-stacked logic and DRAM, or in the memory controllers), so that data movement between the computation units and memory is reduced or eliminated. While the idea of PIM is not new, we examine two new approaches to enabling PIM: 1) exploiting analog properties of DRAM to perform massively-parallel operations in memory, and 2) exploiting 3D-stacked memory technology design to provide high bandwidth to in-memory logic. We conclude by discussing work on solving key challenges to the practical adoption of PIM.
Concurrent Data Structures for Near-Memory Computing. The performance gap between memory and CPU has grown exponentially. To bridge this gap, hardware architects have proposed near-memory computing (also called processing-in-memory, or PIM), where a lightweight processor (called a PIM core) is located close to memory. Due to its proximity to memory, a memory access from a PIM core is much faster than that from a CPU core. New advances in 3D integration and die-stacked memory make PIM viable in the near future. Prior work has shown significant performance improvements by using PIM for embarrassingly parallel and data-intensive applications, as well as for pointer-chasing traversals in sequential data structures. However, current server machines have hundreds of cores, and algorithms for concurrent data structures exploit these cores to achieve high throughput and scalability, with significant benefits over sequential data structures. Thus, it is important to examine how PIM performs with respect to modern concurrent data structures and understand how concurrent data structures can be developed to take advantage of PIM. This paper is the first to examine the design of concurrent data structures for PIM. We show two main results: (1) naive PIM data structures cannot outperform state-of-the-art concurrent data structures, such as pointer-chasing data structures and FIFO queues, (2) novel designs for PIM data structures, using techniques such as combining, partitioning and pipelining, can outperform traditional concurrent data structures, with a significantly simpler design.
The DRAM Latency PUF: Quickly Evaluating Physical Unclonable Functions by Exploiting the Latency-Reliability Tradeoff in Modern Commodity DRAM Devices Physically Unclonable Functions (PUFs) are commonly used in cryptography to identify devices based on the uniqueness of their physical microstructures. DRAM-based PUFs have numerous advantages over PUF designs that exploit alternative substrates: DRAM is a major component of many modern systems, and a DRAM-based PUF can generate many unique identifiers. However, none of the prior DRAM PUF proposals provide implementations suitable for runtime-accessible PUF evaluation on commodity DRAM devices. Prior DRAM PUFs exhibit unacceptably high latencies, especially at low temperatures (e.g., >125.8s on average for a 64KiB memory segment below 55C), and they cause high system interference by keeping part of DRAM unavailable during PUF evaluation. In this paper, we introduce the DRAM latency PUF, a new class of fast, reliable DRAM PUFs. The key idea is to reduce DRAM read access latency below the reliable datasheet specifications using software-only system calls. Doing so results in error patterns that reflect the compound effects of manufacturing variations in various DRAM structures (e.g., capacitors, wires, sense amplifiers). Based on a rigorous experimental characterization of 223 modern LPDDR4 DRAM chips, we demonstrate that these error patterns 1) satisfy runtime-accessible PUF requirements, and 2) are quickly generated (i.e., at 88.2 ms) irrespective of operating temperature using a real system with no additional hardware modifications. We show that, for a constant DRAM capacity overhead of 64KiB, our implementation of the DRAM latency PUF enables an average (minimum, maximum) PUF evaluation time speedup of 152x (109x, 181x) at 70C and 1426x (868x, 1783x) at 55C when compared to a DRAM retention PUF and achieves greater speedups at even lower temperatures.
In-Memory Data Rearrangement for Irregular, Data-Intensive Computing The data rearrangement engine (DRE) performs in-memory data restructuring to accelerate irregular, data-intensive applications. An emulation on a field-programmable gate array shows how the DRE could improve speedup, memory bandwidth, and energy consumption on three representative benchmarks.
A scalable processing-in-memory accelerator for parallel graph processing The explosion of digital data and the ever-growing need for fast data analysis have made in-memory big-data processing in computer systems increasingly important. In particular, large-scale graph processing is gaining attention due to its broad applicability from social science to machine learning. However, scalable hardware design that can efficiently process large graphs in main memory is still an open problem. Ideally, cost-effective and scalable graph processing systems can be realized by building a system whose performance increases proportionally with the sizes of graphs that can be stored in the system, which is extremely challenging in conventional systems due to severe memory bandwidth limitations. In this work, we argue that the conventional concept of processing-in-memory (PIM) can be a viable solution to achieve such an objective. The key modern enabler for PIM is the recent advancement of the 3D integration technology that facilitates stacking logic and memory dies in a single package, which was not available when the PIM concept was originally examined. In order to take advantage of such a new technology to enable memory-capacity-proportional performance, we design a programmable PIM accelerator for large-scale graph processing called Tesseract. Tesseract is composed of (1) a new hardware architecture that fully utilizes the available memory bandwidth, (2) an efficient method of communication between different memory partitions, and (3) a programming interface that reflects and exploits the unique hardware design. It also includes two hardware prefetchers specialized for memory access patterns of graph processing, which operate based on the hints provided by our programming model. Our comprehensive evaluations using five state-of-the-art graph processing workloads with large real-world graphs show that the proposed architecture improves average system performance by a factor of ten and achieves 87% average energy reduction over conventional systems.
Chipyard: Integrated Design, Simulation, and Implementation Framework for Custom SoCs Continued improvement in computing efficiency requires functional specialization of hardware designs. Agile hardware design methodologies have been proposed to alleviate the increased design costs of custom silicon architectures, but their practice thus far has been accompanied with challenges in integration and validation of complex systems-on-a-chip (SoCs). We present the Chipyard framework, an integrated SoC design, simulation, and implementation environment for specialized compute systems. Chipyard includes configurable, composable, open-source, generator-based IP blocks that can be used across multiple stages of the hardware development flow while maintaining design intent and integration consistency. Through cloud-hosted FPGA accelerated simulation and rapid ASIC implementation, Chipyard enables continuous validation of physically realizable customized systems.
RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing Personalized recommendation systems leverage deep learning models and account for the majority of data center AI cycles. Their performance is dominated by memory-bound sparse embedding operations with unique irregular memory access patterns that pose a fundamental challenge to accelerate. This paper proposes a lightweight, commodity DRAM compliant, near-memory processing solution to accelerate personalized recommendation inference. The in-depth characterization of production-grade recommendation models shows that embedding operations with high model-, operator- and data-level parallelism lead to memory bandwidth saturation, limiting recommendation inference performance. We propose RecNMP which provides a scalable solution to improve system throughput, supporting a broad range of sparse embedding models. RecNMP is specifically tailored to production environments with heavy co-location of operators on a single server. Several hardware/software co-optimization techniques such as memory-side caching, table-aware packet scheduling, and hot entry profiling are studied, providing up to 9.8× memory latency speedup over a highly-optimized baseline. Overall, RecNMP offers 4.2× throughput improvement and 45.8% memory energy savings.
Information spreading in stationary Markovian evolving graphs Markovian evolving graphs [2] are dynamic-graph models where the links among a fixed set of nodes change over time according to an arbitrary Markovian rule. They are extremely general and they can well describe important dynamic-network scenarios.
Mobile Network Resource Sharing Options: Performance Comparisons. Resource sharing among mobile network operators is a promising way to tackle growing data demand by increasing capacity and reducing costs of network infrastructure deployment and operation. In this work, we evaluate sharing options that range from simple approaches that are feasible in the near-term on traditional infrastructure to complex methods that require specialized/virtualized infrastructure. We build a simulation testbed supporting two geographically overlapped 4G LTE macro cellular networks and model the sharing architecture/process between the network operators. We compare Capacity Sharing (CS) and Spectrum Sharing (SS) on traditional infrastructure and Virtualized Spectrum Sharing (VSS) and Virtualized PRB Sharing (VPS) on virtualized infrastructure under light, moderate and heavy user loading scenarios in collocated and noncollocated E-UTRAN deployment topologies. We also study these sharing options in conservative and aggressive sharing participation modes. Based on simulation results, we conclude that CS, a generalization of traditional roaming, is the best performing and simplest option, SS is least effective and that VSS and VPS perform better than spectrum sharing with added complexity.
An Introduction to Reconfigurable Systems Reconfigurability can be thought of as software-defined functionality, where flexibility is controlled predominately through the specification of bit patterns. Reconfigurable systems can be as simple as a single switch, or as abstract and powerful as programmable matter. This paper considers the generalization of reconfigurable systems as an important evolving discipline, bolstered by real-world archetypes such as field programmable gate arrays and software-definable radio (platform and application, respectively). It considers what reconfigurable systems actually are, their motivation, their taxonomy, the fundamental mechanisms and architectural considerations underlying them, designing them and using them in applications. With well-known real-world instances, such as the field programmable gate array, the paper attempts to motivate an understanding of the many possible directions and implications of a new class of system which is fundamentally based on the ability to change.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
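A behavioral sketch of the slope-dependent sampling idea (not of the VCO or pulse-generator circuit) is given below: the next sampling interval shrinks where the estimated slope is large and stretches where the signal is flat, which is the mechanism the abstract credits for the low average rate. The rate bounds and the slope-to-rate gain are assumed values for illustration only.

```python
import math

def slope_adaptive_sample(signal, fs_uniform, f_min=50.0, f_max=1000.0, k=200.0):
    """Resample a uniformly sampled signal with a slope-dependent rate.

    signal     : list of samples taken at fs_uniform (Hz)
    f_min/f_max: bounds on the instantaneous sampling rate (Hz), assumed values
    k          : slope-to-rate gain (Hz per unit slope per second), assumed value
    Returns (times, values) of the nonuniform samples.
    """
    dt = 1.0 / fs_uniform
    times, values = [0.0], [signal[0]]
    t = 0.0
    while True:
        i = int(round(t * fs_uniform))
        if i + 1 >= len(signal):
            break
        slope = abs(signal[i + 1] - signal[i]) / dt
        rate = min(f_max, max(f_min, k * slope))   # sample faster on steep segments
        t += 1.0 / rate
        j = min(int(round(t * fs_uniform)), len(signal) - 1)
        times.append(t)
        values.append(signal[j])
    return times, values

# Example: an ECG-like bump sampled at 1 kHz, then resampled adaptively
fs = 1000.0
sig = [math.exp(-((n / fs - 0.5) ** 2) / 0.001) for n in range(int(fs))]
t, v = slope_adaptive_sample(sig, fs)
print(f"{len(sig)} uniform samples -> {len(v)} adaptive samples")
```

Flat baseline regions end up sampled near f_min while the steep QRS-like bump is sampled near f_max, which is how an average rate far below the Nyquist-style uniform rate can still preserve reconstruction quality.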
1.023645
0.022222
0.022222
0.022222
0.015118
0.011111
0.009306
0.005939
0.000444
0.000023
0
0
0
0
Client-edge-cloud hierarchical federated learning Federated Learning is a collaborative machine learning framework to train a deep learning model without accessing clients’ private data. Previous works assume one central parameter server either at the cloud or at the edge. The cloud server can access more data but with excessive communication overhead and long latency, while the edge server enjoys more efficient communications with the clients. To combine their advantages, we propose a client-edge-cloud hierarchical Federated Learning system, supported with a HierFAVG algorithm that allows multiple edge servers to perform partial model aggregation. In this way, the model can be trained faster and better communication-computation trade-offs can be achieved. Convergence analysis is provided for HierFAVG and the effects of key parameters are also investigated, which lead to qualitative design guidelines. Empirical experiments verify the analysis and demonstrate the benefits of this hierarchical architecture in different data distribution scenarios. Particularly, it is shown that by introducing the intermediate edge servers, the model training time and the energy consumption of the end devices can be simultaneously reduced compared to cloud-based Federated Learning.
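The client-edge-cloud flow described above reduces to nested weighted averaging; the sketch below uses plain Python lists as stand-in parameter vectors. Weighting by sample count and a single edge and cloud aggregation per round are FedAvg-style assumptions, not the exact HierFAVG schedule or hyperparameters.

```python
def weighted_average(models, weights):
    """Element-wise weighted average of flat parameter vectors."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total for i in range(dim)]

def hierarchical_round(edge_groups):
    """One communication round of client -> edge -> cloud aggregation.

    edge_groups: list of edges; each edge is a list of (client_model, n_samples).
    Returns the cloud-aggregated global model.
    """
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        models = [m for m, _ in clients]
        sizes = [n for _, n in clients]
        edge_models.append(weighted_average(models, sizes))   # partial aggregation at the edge
        edge_sizes.append(sum(sizes))
    return weighted_average(edge_models, edge_sizes)           # final aggregation at the cloud

# Two edge servers, each with two clients holding 2-parameter "models"
edges = [
    [([1.0, 2.0], 100), ([3.0, 4.0], 300)],
    [([0.0, 0.0], 200), ([2.0, 2.0], 200)],
]
print(hierarchical_round(edges))   # -> [1.75, 2.25]
```

In a real system the edge aggregation would run several times between cloud aggregations; that ratio is one of the key parameters whose effect the paper's convergence analysis investigates.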
Federated Learning: Challenges, Methods, and Future Directions Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized. Training in heterogeneous and potentially massive networks introduces novel challenges that require a fundamental departure from standard approaches for large-scale machine learning, distributed optimization, and privacy-preserving data analysis. In this article, we discuss the unique characteristics and challenges of federated learning, provide a broad overview of current approaches, and outline several directions of future work that are relevant to a wide range of research communities.
AIR Tools - A MATLAB package of algebraic iterative reconstruction methods We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new "training" algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP criterion; for the first two methods "training" can be used to find the optimal discrepancy parameter.
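As a rough illustration of the ART class of row-action methods the package implements, here is a bare Kaczmarz sweep with a fixed relaxation parameter and no stopping rule (both of which the actual package selects far more carefully); the test problem is synthetic.

```python
import numpy as np

def kaczmarz(A, b, iters=50, relax=1.0, x0=None):
    """Classic ART / Kaczmarz iteration: project onto one row's hyperplane at a time."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.sum(A * A, axis=1)
    for _ in range(iters):
        for i in range(m):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]   # relaxed projection onto row i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 20))
    x_true = rng.normal(size=20)
    b = A @ x_true
    x = kaczmarz(A, b, iters=200)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```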
A Survey on Network Methodologies for Real-Time Analytics of Massive IoT Data and Open Research Issues. With the widespread adoption of the Internet of Things (IoT), the number of connected devices is growing at an exponential rate, which is contributing to ever-increasing, massive data volumes. Real-time analytics on the massive IoT data, referred to as the “real-time IoT analytics” in this paper, is becoming the mainstream with an aim to provide an immediate or non-immediate actionable insights an...
Computation Offloading Toward Edge Computing We are living in a world where massive end devices perform computing everywhere and everyday. However, these devices are constrained by the battery and computational resources. With the increasing number of intelligent applications (e.g., augmented reality and face recognition) that require much more computational power, they shift to perform computation offloading to the cloud, known as mobile cloud computing (MCC). Unfortunately, the cloud is usually far away from end devices, leading to a high latency as well as the bad quality of experience (QoE) for latency-sensitive applications. In this context, the emergence of edge computing is no coincidence. Edge computing extends the cloud to the edge of the network, close to end users, bringing ultra-low latency and high bandwidth. Consequently, there is a trend of computation offloading toward edge computing. In this paper, we provide a comprehensive perspective on this trend. First, we give an insight into the architecture refactoring in edge computing. Based on that insight, this paper reviews the state-of-the-art research on computation offloading in terms of application partitioning, task allocation, resource management, and distributed execution, with highlighting features for edge computing. Then, we illustrate some disruptive application scenarios that we envision as critical drivers for the flourish of edge computing, such as real-time video analytics, smart “things” (e.g., smart city and smart home), vehicle applications, and cloud gaming. Finally, we discuss the opportunities and future research directions.
Adaptive Multivariate Data Compression in Smart Metering Internet of Things Recent advances in electric metering infrastructure have given rise to the generation of gigantic chunks of data. Transmission of all of these data certainly poses a significant challenge in bandwidth and storage constrained Internet of Things (IoT), where smart meters act as sensors. In this work, a novel multivariate data compression scheme is proposed for smart metering IoT. The proposed algorithm exploits the cross correlation between different variables sensed by smart meters to reduce the dimension of data. Subsequently, sparsity in each of the decorrelated streams is utilized for temporal compression. To examine the quality of compression, the multivariate data is characterized using multivariate normal-autoregressive integrated moving average modeling before compression as well as after reconstruction of the compressed data. Our performance studies indicate that compared to the state-of-the-art, the proposed technique is able to achieve impressive bandwidth saving for transmission of data over communication network without compromising faithful reconstruction of data at the receiver. The proposed algorithm is tested in a real smart metering setup and its time complexity is also analyzed.
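The general pipeline described (decorrelate across variables, then exploit per-stream sparsity in time) can be sketched as below. A plain PCA/SVD decorrelation and magnitude thresholding stand in for the authors' specific transforms, so this is an assumption-laden illustration rather than their algorithm.

```python
import numpy as np

def compress(block, n_components, keep_frac):
    """Decorrelate a (samples x variables) block with PCA, then keep only the
    largest-magnitude temporal coefficients of each retained component."""
    mean = block.mean(axis=0)
    centered = block - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                       # principal directions (components x variables)
    streams = centered @ basis.T                    # decorrelated streams
    keep = max(1, int(keep_frac * streams.shape[0]))
    sparse = np.zeros_like(streams)
    for k in range(streams.shape[1]):
        idx = np.argsort(np.abs(streams[:, k]))[-keep:]
        sparse[idx, k] = streams[idx, k]            # temporal compression per stream
    return sparse, basis, mean

def reconstruct(sparse, basis, mean):
    return sparse @ basis + mean

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Spiky latent streams (sparse in time) mixed into 6 correlated "meter" variables.
    latent = np.zeros((256, 2))
    latent[rng.choice(256, 12, replace=False), 0] = rng.normal(size=12) * 5
    latent[rng.choice(256, 12, replace=False), 1] = rng.normal(size=12) * 5
    block = latent @ rng.normal(size=(2, 6)) + 0.01 * rng.normal(size=(256, 6))
    s, B, m = compress(block, n_components=2, keep_frac=0.25)
    err = np.linalg.norm(reconstruct(s, B, m) - block) / np.linalg.norm(block)
    print(f"relative reconstruction error: {err:.3f}")
```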
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
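A toy way to reproduce the kind of measurement the MRU-change concept builds on is to replay an address trace through an LRU-managed set-associative cache and histogram the stack position of each hit; the cache geometry and synthetic trace below are illustrative.

```python
from collections import defaultdict

def stack_position_histogram(trace, ways=4, sets=16, line_size=16):
    """Replay an address trace on a set-associative LRU cache and count, for each
    hit, which replacement-stack position it landed on (0 = MRU)."""
    stacks = defaultdict(list)          # set index -> cache lines ordered MRU..LRU
    hist = defaultdict(int)             # stack position (or 'miss') -> count
    for addr in trace:
        line = addr // line_size
        idx = line % sets
        stack = stacks[idx]
        if line in stack:
            hist[stack.index(line)] += 1
            stack.remove(line)
        else:
            hist["miss"] += 1
            if len(stack) >= ways:
                stack.pop()             # evict the LRU line
        stack.insert(0, line)           # the accessed line becomes MRU
    return dict(hist)

if __name__ == "__main__":
    # A hot working set touched every iteration plus a cold set touched occasionally:
    # hits concentrate at the MRU end of the stack.
    trace = []
    for rep in range(50):
        trace += list(range(0, 256, 16))
        if rep % 5 == 0:
            trace += list(range(4096, 4096 + 256, 16))
    print(stack_position_histogram(trace))
```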
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
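The randomized cluster-head rotation can be sketched with the commonly cited LEACH election threshold: in each round, a node that has not recently served as head elects itself with probability T(n). The network size and desired head fraction P below are illustrative parameters, not values from the paper.

```python
import random

def leach_threshold(P, round_no, was_head_recently):
    """LEACH election threshold T(n): nodes that already served as cluster-head in the
    current epoch (last 1/P rounds) are excluded; the others use a rising threshold."""
    if was_head_recently:
        return 0.0
    return P / (1 - P * (round_no % int(round(1 / P))))

def elect_cluster_heads(node_ids, P, round_no, recent_heads):
    heads = []
    for nid in node_ids:
        t = leach_threshold(P, round_no, nid in recent_heads)
        if random.random() < t:
            heads.append(nid)
    return heads

if __name__ == "__main__":
    random.seed(0)
    nodes = list(range(100))
    recent = set()
    for r in range(5):
        heads = elect_cluster_heads(nodes, P=0.05, round_no=r, recent_heads=recent)
        recent.update(heads)
        print(f"round {r}: {len(heads)} cluster-heads")
```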
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG-stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS-stage is carefully optimized. We show that a CS-stage with deep submicron transistors can have high IIP2, because the νgs·νds cross-term in a two-dimensional Taylor approximation of the IDS(VGS, VDS) characteristic can cancel the traditionally dominant square-law term in the IDS(VGS) relation at practical gain values. Using standard 65 nm transistors at 1.2 V supply voltage, we realize a balun-LNA with 15 dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz and 3.5 GHz with gain and phase errors below 0.3 dB and ±2 degrees. The total power consumption is 21 mW, while the active area is only 0.01 mm2.
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n − 1 in general, but that β ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant gap, almost linear in n, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
Data Collection Using RFID and a Mobile Reader The widespread acceptance of RFID tags in inventory control applications has allowed the development of increasingly complex and useful inventory management techniques. This paper approaches the problem of real-time inventory querying in a large warehouse. We use a combination of powered, wireless-capable active RFID tags and a mobile RFID reader to design a querying system that uses both a mesh network formed with the active RFID tags and the mobile reader to balance query latency and total network lifetime. We implemented and simulated our system on Crossbow MicaZ motes and a custom robot platform. Our results show that our hybrid algorithm balances query latency and network lifetime more effectively than mesh network-only and mobile reader-only algorithms. In particular, we see network power consumption reduction of up to 60% and a 15% reduction in total distance travelled by the mobile reader. We conclude that introducing a mobile reader to active tag networks allows the design of flexible network algorithms that can reduce the impact of battery life on the network.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
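The key-to-node mapping underlying Chord is consistent hashing onto an identifier ring; the sketch below resolves a key to its successor node with a simple scan over a sorted ring (real Chord uses finger tables for O(log N) routing, omitted here). The ring size and node names are illustrative.

```python
import hashlib
from bisect import bisect_left

RING_BITS = 16                       # small identifier space for the example
RING_SIZE = 1 << RING_BITS

def chord_id(name: str) -> int:
    """Hash a node name or key onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % RING_SIZE

class Ring:
    def __init__(self, node_names):
        self.ids = sorted(chord_id(n) for n in node_names)
        self.by_id = {chord_id(n): n for n in node_names}

    def successor(self, key: str) -> str:
        """Node responsible for a key: first node id >= hash(key), wrapping around."""
        k = chord_id(key)
        i = bisect_left(self.ids, k)
        node_id = self.ids[i % len(self.ids)]
        return self.by_id[node_id]

if __name__ == "__main__":
    ring = Ring([f"node{i}" for i in range(8)])
    for key in ["alice.mp3", "bob.txt", "carol.jpg"]:
        print(key, "->", ring.successor(key))
```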
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
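As one concrete instance of the splittings discussed, the lasso problem min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z can be solved with the standard ADMM x-, z-, and dual updates. The sketch below uses a fixed penalty parameter and iteration count instead of a proper stopping criterion, and a synthetic problem.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    # Cache the factorization reused by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update
        z = soft_threshold(x + u, lam / rho)                               # z-update (prox of l1)
        u = u + x - z                                                      # scaled dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.normal(size=(80, 40))
    x_true = np.zeros(40)
    x_true[:5] = rng.normal(size=5) * 3
    b = A @ x_true + 0.01 * rng.normal(size=80)
    x_hat = admm_lasso(A, b, lam=0.5)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```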
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A multistage filterbank-based channelizer and its multiplier-less realization This paper proposes a multistage filterbank-based channelizer for software radio base stations. The proposed channelizer is capable of receiving channels with potentially different bandwidths as required in a multi-standard cellular based station. It consists of multiple stages of DFT filter banks and efficient sample rate changers. The front-end DFT filter bank of the channelizer has a fixed number of channels but the passband supports overlap with each other. The received signals selected by a given output of this filter bank are fed into sample rate changers so that they can fit into the fixed channel spacing of the DFT filter banks in the following stages. Due to the lowered sample rate, these back-end DFT filter banks can have either fixed or variable number of channels. Repeatedly using this multistage architecture, channels with different bandwidths can be isolated. The design and implementation of the proposed channelizer are discussed in detail. An example of dual-mode GSM/W-CDMA channelizer is also discussed to illustrate the proposed design methodology.
Efficient filterbank channelizers for software radio receivers For cellular software radio receivers, this paper presents a computationally efficient algorithm for extracting individual radio channels from the output of the wideband A/D converter. In a software radio, the extraction of individual channels from the output of the wide band A/D converter is by far the most computationally demanding task; hence, it is very important to devise computationally efficient algorithms for this task. Our algorithm is obtained by modifying the DFT filter bank structure that is well known in the multi-rate signal processing literature (Scheuermann and Gockler, 1981; Vaidyanathan, 1990). We show that the complexity of the proposed algorithm is significantly less (2×-50×) than the complexity of the conventional channelizers
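The DFT filter-bank structure that such channelizers build on can be sketched with a prototype lowpass filter, a polyphase split, and an FFT across the branches. The prototype design, channel count, and test tones below are illustrative assumptions, and the channel indexing and phase conventions are simplified relative to a production channelizer.

```python
import numpy as np
from scipy.signal import firwin

def polyphase_channelizer(x, M, taps_per_branch=16):
    """Split a wideband signal x into M equally spaced channels, each decimated by M."""
    proto = firwin(M * taps_per_branch, 1.0 / M)        # prototype lowpass, cutoff = half a channel spacing
    poly = proto.reshape(taps_per_branch, M)            # polyphase decomposition (rows = time, cols = branch)
    n_blocks = len(x) // M
    x = x[: n_blocks * M].reshape(n_blocks, M)
    # Filter each polyphase branch, then take an M-point transform across branches per block.
    branch_out = np.zeros((n_blocks, M), dtype=complex)
    for m in range(M):
        branch_out[:, m] = np.convolve(x[:, m], poly[:, m], mode="same")
    return np.fft.ifft(branch_out, axis=1) * M          # columns = channel outputs

if __name__ == "__main__":
    fs, M = 64_000, 8
    t = np.arange(8192) / fs
    # Two narrowband tones sitting in different channels of the 8-channel bank.
    x = np.cos(2 * np.pi * 9_000 * t) + 0.5 * np.cos(2 * np.pi * 25_000 * t)
    ch = polyphase_channelizer(x, M)
    power = np.mean(np.abs(ch) ** 2, axis=0)
    print("per-channel power:", np.round(power, 3))
```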
A Low-Power Digit-Based Reconfigurable FIR Filter In this brief, we present a digit-reconfigurable finite-impulse response (FIR) filter architecture with a very fine granularity. It provides a flexible yet compact and low-power solution to FIR filters with a wide range of precision and tap length. Based on the proposed architecture, an 8-digit reconfigurable FIR filter chip is implemented in a single-poly quadruple-metal 0.35-μm CMOS technology. Measurement results show that the fabricated chip operates up to 86 MHz when the filter draws 16.5 mW of power from a 2.3-V power supply.
Efficient wideband channelizer for software radio systems using modulated PR filterbanks An efficient method is proposed for channelizing frequency division multiplexed (FDM) channels in wideband software radio (SWR) received signals that do not satisfy the conditions required for polyphase decomposition of the discrete filterbank (DFB) channelizer. The proposed method, which uses modulated perfect reconstruction (PR) filterbanks, requires fewer computations than DFBs for channelizing wideband signals that are composed of FDM channels of nonequal bandwidths, especially when a large number of channels are extracted. The proposed channelizer, if applied in the reverse direction, can be used to synthesize a set of channels with nonequal bandwidths into a single wideband signal in SWR transmitters. A method is also proposed for efficiently designing the modulated PR filterbanks, which have a large number of subchannels and prototype filters with high stopband attenuations that are used in the proposed channelizer. The computational complexity of the proposed channelizer is compared with the complexity of the DFB channelizer for channelizing the wideband and high-dynamic-range signals that are typical of SWR systems, and simulation results of the proposed channelization method are discussed.
The software radio concept Since early 1980 an exponential blowup of cellular mobile systems has been observed, which has produced, all over the world, the definition of a plethora of analog and digital standards. In 2000 the industrial competition between Asia, Europe, and America promises a very difficult path toward the definition of a unique standard for future mobile systems, although market analyses underline the trading benefits of a common worldwide standard. It is therefore in this field that the software radio concept is emerging as a potential pragmatic solution: a software implementation of the user terminal able to dynamically adapt to the radio environment in which it is, time by time, located. In fact, the term software radio stands for radio functionalities defined by software, meaning the possibility to define by software the typical functionality of a radio interface, usually implemented in TX and RX equipment by dedicated hardware. The presence of the software defining the radio interface necessarily implies the use of DSPs to replace dedicated hardware, to execute, in real time, the necessary software. In this article objectives, advantages, problem areas, and technological challenges of software radio are addressed. In particular, SW radio transceiver architecture, possible SW implementation, and its download are analyzed
All-digital TX frequency synthesizer and discrete-time receiver for Bluetooth radio in 130-nm CMOS We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the ph...
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR) performance.
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found 20 applications potentially misused users’ private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.
Synchronization via Pinning Control on General Complex Networks. This paper studies synchronization via pinning control on general complex dynamical networks, such as strongly connected networks, networks with a directed spanning tree, weakly connected networks, and directed forests. A criterion for ensuring network synchronization on strongly connected networks is given. It is found that the vertices with very small in-degrees should be pinned first. In addition, it is shown that the original condition with controllers can be reformulated such that it does not depend on the form of the chosen controllers, which implies that the vertices with very large out-degrees may be pinned. Then, a criterion for achieving synchronization on networks with a directed spanning tree, which can be composed of many strongly connected components, is derived. It is found that the strongly connected components with very few connections from other components should be controlled and the components with many connections from other components can achieve synchronization even without controls. Moreover, a simple but effective pinning algorithm for reaching synchronization on a general complex dynamical network is proposed. Finally, some simulation examples are given to verify the proposed pinning scheme.
Power saving of a dynamic width controller for a monolithic current-mode CMOS DC-DC converter We propose the dynamic power MOS width controlling technique and the adaptive gate driver voltage technique to find out the better approach to power saving in DC-DC converters. It demonstrates that the dynamic power MOS width controlling technique has much improvement in power consumption than that of the adaptive gate driver voltage technique when the load current is heavy or light. After the dynamic power MOS width modification, the simulation results show that the efficiency of current-mode DC-DC buck converter can be improved from 92% to about 98% in heavy load and from 15% to about 16.3% in light load. However, the adaptive gate driver voltage technique has only little improvement of power saving. It means that the dynamic width controller is the better approach to power saving in the DC-DC converter.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaved phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.10149
0.10298
0.025
0.01449
0.000052
0.000007
0
0
0
0
0
0
0
0
Distributed Service Deployment in Mobile Ad-Hoc Networks Today's applications are increasingly composed out of services. Standardized protocols, such as SOAP, WSDL, and UDDI are used to discover and invoke remotely located services. Nowadays, improved resource capabilities of mobile devices, e.g. PDAs, smart phones, and sensor devices allow the execution of services even on these smaller computing devices. In comparison to traditional centralized process management, a decentralized, cooperative execution of services on embedded real-time systems leads to higher system scalability, better system response time and higher data accuracy. In this paper we describe an efficient way to deploy services onto highly distributed, mobile, and unreliable devices. To achieve an efficient resource tracking we utilize different group-based data retrieval strategies. Furthermore, we present a prototype system that implements our distributed service deployment algorithm and that evaluates our approach in terms of scalability for different network topologies.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 700-kHz bandwidth ΣΔ fractional synthesizer with spurs compensation and linearization techniques for WCDMA applications A ΣΔ fractional-N frequency synthesizer targeting WCDMA receiver specifications is presented. Through spurs compensation and linearization techniques, the PLL bandwidth is significantly extended with only a slight increase in the integrated phase noise. In a 0.18-μm standard digital CMOS technology a fully integrated prototype with 2.1-GHz output frequency and 35 Hz resolution has an area of 3.4 mm2 PADs included, and it consumes 28 mW. With a 3-dB closed-loop bandwidth of 700 kHz, the settling time is only 7 μs. The integrated phase noise plus spurs is -45 dBc for the first WCDMA channel (1 kHz to 1.94 MHz) and -65 dBc for the second channel (2.5 to 6.34 MHz) with a worst case in-band (unfiltered) fractional spur of -60 dBc. Given the extremely large bandwidth, the synthesizer could be used also for TX direct modulation over a broad band. The choice of such a large bandwidth, however, still limits the spur performance. A slightly smaller bandwidth would fulfill WCDMA requirements. This has been shown in a second prototype, using the same architecture but employing an external loop filter and VCO for greater flexibility and ease of testing.
A spur reduction architecture for multiphase fractional PLLs In this study, a multiphase fractional phase-locked loop (PLL) is presented with methods to reduce spurs. General causes of spurs are non-idealities of the phase-frequency detector (PFD) and charge pump (CP) and phase errors between the adjacent phases from the oscillator. In the architecture, the non-idealities of the PFD and CP are compensated and a phase correction circuit is added after the os...
A Hybrid Spur Compensation Technique For Finite-Modulo Fractional-N Phase-Locked Loops A finite-modulo fractional-N PLL utilizing a low-bit high-order Delta Sigma modulator is presented. A 4-bit fourth-order Delta Sigma modulator not only performs non-dithered 16-modulo fractional-N operation but also offers less spur generation with negligible quantization noise. Further spur reduction is achieved by charge compensation in the voltage domain and phase interpolation in the time domain, which significantly relaxes the dynamic range requirement of the charge pump compensation current. A 1.8-2.6 GHz fractional-N PLL is implemented in 0.18 μm CMOS. By employing high-order deterministic Delta Sigma modulation and hybrid spur compensation, the spur level of less than -55 dBc is achieved when the ratio of the bandwidth to minimum frequency resolution is set to 1/4. The prototype PLL consumes 35.3 mW in which only 2.7 mW is consumed by the digital modulator and compensation circuits.
A Fully Integrated Spread-Spectrum Clock Generator by Using Direct VCO Modulation A compact architecture for a fully-integrated spread-spectrum clock generator (SSCG) using voltage-controlled oscillator direct modulation is presented in this paper. A dual-path loop filter in the phase-locked loop is employed to reduce the size of the capacitance in the filter with the aid of an extra charge pump and a unity gain amplifier. At the same time, a third charge pump which generates ...
Folded Noise Prediction in Nonlinear Fractional-N Frequency Synthesizers The presence of nonlinearities in a fractional-N frequency synthesizer leads to the generation of an additional component of noise that appears in the output phase noise spectrum. This nonlinearity-induced noise component manifests itself as spurious tones and an elevated noise floor, also known as folded noise. This paper presents a mathematical analysis of the folded noise generated in fractiona...
Prediction of Phase Noise and Spurs in a Nonlinear Fractional-N Frequency Synthesizer Integer boundary spurs appear in the passband of the loop response of fractional-N phase lock loops and are, therefore, a potentially significant component of the phase noise. In spite of measures guaranteeing spur-free modulator outputs, the interaction of the modulation noise from a divider controller with inevitable loop nonlinearities produces such spurs. This paper presents analytical predictions of the locations and amplitudes of the spurs and accompanying noise floor levels produced by interaction between a divider controller output and a PLL loop with a static nonlinearity. A key finding is that the spur locations and amplitudes can be estimated by using only the knowledge of the structure and pdf of the accumulated modulator noise and the nonlinearity. These predictions also offer new insights into why the spurs appear.
A 1.1-GHz CMOS fractional-N frequency synthesizer with a 3-b third-order ΔΣ modulator A 1.1-GHz fractional-N frequency synthesizer is implemented in 0.5-μm CMOS employing a 3-b third-order ΔΣ modulator. The in-band phase noise of -92 dBc/Hz at 10-kHz offset with a spur of less than -95 dBc is measured at 900.03 MHz with a phase detector frequency of 7.994 MHz and a loop bandwidth of 40 kHz. Having less than 1-Hz frequency resolution and agile switching sp...
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Distributed Subgradient Methods For Multi-Agent Optimization We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
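As a concrete illustration of the scheme described above, the sketch below runs the consensus-plus-subgradient iteration on a toy objective, the sum of absolute deviations f_i(x) = |x - a_i|, whose minimizer is the median of the a_i. The ring topology, the doubly stochastic weights, and the diminishing step size 1/t are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def distributed_subgradient(a, iters=3000):
    """Each agent i holds f_i(x) = |x - a_i|; sum_i f_i is minimized at the median of a."""
    n = len(a)
    x = np.zeros(n)                          # one local estimate per agent
    W = np.zeros((n, n))                     # doubly stochastic mixing weights on a ring
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    for t in range(1, iters + 1):
        g = np.sign(x - a)                   # a subgradient of |x_i - a_i| at x_i
        x = W @ x - (1.0 / t) * g            # average with neighbors, then take a subgradient step
    return x

if __name__ == "__main__":
    a = np.array([1.0, 2.0, 3.0, 10.0, 11.0])
    print(np.round(distributed_subgradient(a), 3))   # all agents end up near the median, 3.0
```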
Filtering by Aliasing In this manuscript we describe a fundamentally novel approach to the design of anti-aliasing filters. The approach, termed Filtering by Aliasing, incorporates the frequency-domain aliasing operation itself into the filtering task. The spectral content is spread with a periodic mixer and weighted with a simple analog filter before it aliases at the sampler. By designing the system according to the formulations presented in this manuscript, the sampled output will have been subjected to sharp, highly programmable anti-alias filtering. This manuscript describes the proposed Filtering by Aliasing idea, the effective programmable anti-aliasing filter, its design, and its range of frequency responses. The manuscript also addresses the implementation sensitivities of the proposed Filtering by Aliasing approach and provides a performance comparison against existing techniques in the context of reconfigurable anti-alias filtering.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs showing a superior figure of merit (Rds,ON × Qg) in comparison with silicon FETs [1] can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully take advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit propagation delay, tdelay, of up to several 10s of ns in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10MHz regime. Moreover, the switching slew rate (SR) of driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while slow SR leads to long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving which incurs additional cost and complexity of the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high fsw operation in multiphase implementation are challenging.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.05, 0.05, 0.05, 0.05, 0.05, 0.016667, 0.00625, 0, 0, 0, 0, 0, 0, 0
A networking perspective on self-organizing intersection management We explore the networking aspects of realizing self-organized intersection management. Inter-Vehicle Communication (IVC) based on DSRC/WAVE is expected to complement other communication technologies including Wi-Fi and 3G/4G. Applications range from road traffic efficiency to critical safety situations, intersection management being one of the most compelling applications. The Virtual Traffic Light (VTL) system is envisioned to replace or to complement physical traffic lights in order to reduce costs and to optimize the road traffic flow in urban environments. VTL, developed at Carnegie Mellon University, outlines the general capabilities of such a system. The VTL system heavily relies on coordinating clusters of cars and the communication among them. We developed a DSRC/WAVE based communication protocol for the management of these clusters, carefully investigating all the networking aspects related to this management and the communication between the cluster leaders. Comparing VTL to physical traffic lights, we are able to show a speedup of up to 35% in realistic environments. Looking at the packet loss on the wireless link, we can show that even in high communication load scenarios, the speedup remains stable.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
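A toy sketch of the key-to-node mapping that Chord builds on: identifiers are hashed onto a circle and a key is assigned to its successor node. Finger tables, node joins and leaves, and stabilization are omitted, and the 16-bit identifier space and the node names are illustrative assumptions.

```python
import hashlib
from bisect import bisect_left

M = 16                                           # identifier circle of 2**M points (illustrative)

def ident(name: str) -> int:
    """Hash a node name or a key onto the identifier circle."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(ring, key_id):
    """First node identifier clockwise from key_id, wrapping around the circle."""
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]

if __name__ == "__main__":
    nodes = ["node-1", "node-2", "node-3", "node-4"]
    ring = sorted(ident(n) for n in nodes)       # sorted node identifiers form the ring
    name_of = {ident(n): n for n in nodes}
    for key in ["alice.txt", "bob.txt", "carol.txt"]:
        print(f"{key} -> {name_of[successor(ring, ident(key))]}")
```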
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Optimal Scheduling of Battery Charging Station Serving Electric Vehicles Based on Battery Swapping A battery charging station (BCS) is a charging facility that supplies electric energy for recharging electric vehicles’ depleted batteries (DBs). A BCS has a certain number of charging bays and maintains a dynamic inventory of fully charged batteries (FBs). This paper studies a BCS scheduling (BCSS) problem whose target is to schedule the charging processes of the charging bays such that the charging cost is minimized while satisfying the FB demand. Specifically, the BCSS problem has two types of operations: 1) loading DBs into the charging bays and then unloading them to the FB inventory when they are fully charged and 2) controlling the charging rate of each charging bay. We formulate the BCSS problem as a mixed-integer program with quadratic battery degradation cost. A generalized Benders decomposition algorithm is then developed to solve the problem efficiently. The salient features of the developed algorithm are that: 1) each charging bay can solve its own subproblem separately and 2) each subproblem can be further partitioned into multiple independent and identically structured quadratic programming problems, and thus the algorithm facilitates an efficient parallel implementation. We perform extensive real data simulation to validate the optimization model and demonstrate the efficiency of the proposed algorithm.
Distributed Operation Management of Battery Swapping-Charging Systems A battery swapping-charging system (BSCS) is a promising solution to provide better battery swapping service to electric vehicles (EVs). To achieve the optimal operation of BSCSs, a closed-loop supply chain (CLSC) based BSCS model is proposed to realize the combined operation of battery charging stations (BCSs) and battery swapping stations (BSSs) while the quality of battery swapping service at BSSs is ensured with a network calculus based service model. The battery logistics within the BSCS is modeled based on a time-space network (TSN) technique. The charging and the logistics of depleted batteries and well-charged batteries are optimally managed to maximize the revenue of the BSCS. The optimization problem is formulated as a mixed-integer linear programming (MILP) model with various TSN and operation constraints. Random-permuted alternating direction method of multipliers (RP-ADMM) is applied as a heuristic to solve the problem in a distributed way. Simulation results verify the feasibility of the proposed model and the heuristic solution with small optimality losses and less computation time.
A BCMP network approach to modeling and controlling autonomous mobility-on-demand systems In this paper we present a queuing network approach to the problem of routing and rebalancing a fleet of self-driving vehicles providing on-demand mobility within a capacitated road network. We refer to such systems as autonomous mobility-on-demand (AMoD) systems. We first cast an AMoD system into a closed, multi-class Baskett–Chandy–Muntz–Palacios (BCMP) queuing network model capable of capturing the passenger arrival process, traffic, the state-of-charge of electric vehicles, and the availability of vehicles at the stations. Second, we propose a scalable method for the synthesis of routing and charging policies, with performance guarantees in the limit of large fleet sizes. Third, we explore the applicability of our theoretical results on a case study of Manhattan. Collectively, this paper provides a unifying framework for the analysis and control of AMoD systems, which provides a large set of modeling options (e.g. the inclusion of road capacities and charging constraints), and subsumes earlier Jackson and network flow models.
Electric Vehicle Route Selection and Charging Navigation Strategy Based on Crowd Sensing. This paper proposes an electric vehicle (EV) route selection and charging navigation optimization model, aiming to reduce EV users' travel costs and improve the load level of the distribution system concerned. Moreover, with the aid of crowd sensing, a road velocity matrix acquisition and restoration algorithm is proposed. In addition, the waiting time at charging stations is addressed based o...
Optimal Pricing of Public Electric Vehicle Charging Stations Considering Operations of Coupled Transportation and Power Systems Recognized as an efficient approach to reduce fossil fuel consumption and alleviate the environmental crisis, the adoption of electric vehicles (EVs) in urban transportation systems is receiving more and more attention. EVs will tightly couple the operations of the urban transportation network (UTN) and the power distribution network (PDN), necessitating interdependent traffic-power modeling to optimize the ...
Control of robotic mobility-on-demand systems: A queueing-theoretical perspective In this paper we present queueing-theoretical methods for the modeling, analysis, and control of autonomous mobility-on-demand (MOD) systems wherein robotic, self-driving vehicles transport customers within an urban environment and rebalance themselves to ensure acceptable quality of service throughout the network. We first cast an autonomous MOD system within a closed Jackson network model with passenger loss. It is shown that an optimal rebalancing algorithm minimizing the number of (autonomously) rebalancing vehicles while keeping vehicle availabilities balanced throughout the network can be found by solving a linear program. The theoretical insights are used to design a robust, real-time rebalancing algorithm, which is applied to a case study of New York City and implemented on an eight-vehicle mobile robot testbed. The case study of New York shows that the current taxi demand in Manhattan can be met with about 8,000 robotic vehicles (roughly 70% of the size of the current taxi fleet operating in Manhattan). Finally, we extend our queueing-theoretical setup to include congestion effects, and study the impact of autonomously rebalancing vehicles on overall congestion. Using a simple heuristic algorithm, we show that additional congestion due to autonomous rebalancing can be effectively avoided on a road network. Collectively, this paper provides a rigorous approach to the problem of system-wide coordination of autonomously driving vehicles, and provides one of the first characterizations of the sustainability benefits of robotic transportation networks.
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
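The classical Lloyd-Max result that the abstract generalizes can be sketched in a few lines: alternate between nearest-level assignment of source samples and centroid updates of the reproduction levels. The Gaussian source, the 4-level codebook, and the sample count below are illustrative assumptions.

```python
import numpy as np

def lloyd_max(samples, levels=4, iters=100):
    """Alternate nearest-level partitioning and centroid updates on empirical samples."""
    q = np.quantile(samples, np.linspace(0.1, 0.9, levels))     # sorted initial levels
    for _ in range(iters):
        edges = (q[:-1] + q[1:]) / 2.0            # decision boundaries: midpoints between levels
        cells = np.digitize(samples, edges)       # assign each sample to its nearest level
        q = np.array([samples[cells == k].mean() if np.any(cells == k) else q[k]
                      for k in range(levels)])    # move each level to the centroid of its cell
    return q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(20000)
    # Result is close to the known optimal 4-level Gaussian quantizer (~±0.453, ±1.510).
    print(np.round(lloyd_max(x), 3))
```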
Distributed average consensus with least-mean-square deviation We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.
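A minimal simulation of the noisy consensus model described above: each node repeatedly replaces its value with a weighted average of its neighbors' values and picks up additive zero-mean noise at every step. The path graph and the Metropolis weights are illustrative assumptions; they are not the optimal least-mean-square-deviation weights that the paper computes.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic edge weights built only from node degrees."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] > 0:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    adj = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # path graph 1-2-3-4-5
    W = metropolis_weights(adj)
    x = rng.uniform(0.0, 10.0, n)
    initial_mean = x.mean()
    for _ in range(200):
        x = W @ x + 0.01 * rng.standard_normal(n)    # weighted averaging corrupted by noise
    print("initial mean:", round(float(initial_mean), 3))
    print("final values:", np.round(x, 3))
    print("mean-square deviation from current average:",
          round(float(np.mean((x - x.mean()) ** 2)), 5))
```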
An area-efficient multistage 3.0- to 8.5-GHz CMOS UWB LNA using tunable active inductors An area-efficient multistage 3.0- to 8.5-GHz ultra-wideband low-noise amplifier (LNA) utilizing tunable active inductors (AIs) is presented. The AI includes a negative impedance circuit (NIC) consisting of a pair of cross-coupled NMOS transistors and is tuned to vary the gain and bandwidth (BW) of the amplifier. Fabricated in a 90-nm digital CMOS process, the proposed fully on-chip LNA occupies a core chip area of only 0.022 mm2. The measurement results show a power gain S21 of 16.0 dB, a noise figure of 3.1-4.4 dB, and an input return loss S11 of less than -10.5 dB over the 3-dB BW of 3.0-8.5 GHz. Tuning the AIs allows one to increase the gain above 18.0 dB and to extend the BW over 9.4 GHz. The LNA consumes 16.0 mW from a power supply of 1.2 V.
Master Data Quality Barriers: An Empirical Investigation Purpose - The development of IT has enabled organizations to collect and store many times more data than they were able to just decades ago. This means that companies are now faced with managing huge amounts of data, which represents new challenges in ensuring high data quality. The purpose of this paper is to identify barriers to obtaining high master data quality.Design/methodology/approach - This paper defines relevant master data quality barriers and investigates their mutual importance through organizing data quality barriers identified in literature into a framework for analysis of data quality. The importance of the different classes of data quality barriers is investigated by a large questionnaire study, including answers from 787 Danish manufacturing companies.Findings - Based on a literature review, the paper identifies 12 master data quality barriers. The relevance and completeness of this classification is investigated by a large questionnaire study, which also clarifies the mutual importance of the defined barriers and the differences in importance in small, medium, and large companies.Research limitations/implications - The defined classification of data quality barriers provides a point of departure for future research by pointing to relevant areas for investigation of data quality problems. The limitations of the study are that it focuses only on manufacturing companies and master data (i.e. not transaction data).Practical implications - The classification of data quality barriers can give companies increased awareness of why they experience data quality problems. In addition, the paper suggests giving primary focus to organizational issues rather than perceiving poor data quality as an IT problem.Originality/value - Compared to extant classifications of data quality barriers, the contribution of this paper represents a more detailed and complete picture of what the barriers are in relation to data quality. Furthermore, the presented classification has been investigated by a large questionnaire study, for which reason it is founded on a more solid empirical basis than existing classifications.
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
OpenIoT: An open service framework for the Internet of Things The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market, and establishing a global IoT ecosystem with the worldwide use of IoT devices and software. We expect that the open IoT service framework we propose will play an important role in the widespread adoption of the Internet of Things in our everyday life, enhancing our quality of life with a large number of innovative applications and services, while also offering endless opportunities to all of the stakeholders in the world of information and communication technologies.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
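To illustrate why event-driven conversion compresses sparse signals, the sketch below compares the number of samples produced by level-crossing sampling and by fixed-rate sampling on a bursty test signal. The signal shape, the level spacing delta, and the 1 kS/s reference rate are illustrative assumptions, not figures from the comparison above.

```python
import numpy as np

def level_crossing_sample(t, x, delta):
    """Emit a (time, value) pair whenever x moves into a new quantization level of width delta."""
    lvl = int(np.floor(x[0] / delta))
    times, values = [t[0]], [lvl * delta]
    for ti, xi in zip(t[1:], x[1:]):
        new_lvl = int(np.floor(xi / delta))
        if new_lvl != lvl:                      # a level boundary was crossed
            lvl = new_lvl
            times.append(ti)
            values.append(lvl * delta)
    return np.array(times), np.array(values)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 10000)            # dense grid standing in for the analog input
    burst = (t > 0.4) & (t < 0.6)               # activity confined to a 0.2 s burst
    x = np.where(burst, np.sin(2 * np.pi * 50 * t), 0.02 * np.sin(2 * np.pi * t))
    lc_t, _ = level_crossing_sample(t, x, delta=0.1)
    print("level-crossing samples:", len(lc_t))          # only the burst generates many samples
    print("fixed-rate samples (1 s at 1 kS/s):", 1000)   # independent of signal activity
```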
1.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0, 0
Multi-sensor optimal information fusion Kalman filter This paper presents a new multi-sensor optimal information fusion criterion weighted by matrices in the linear minimum variance sense; it is equivalent to the maximum likelihood fusion criterion under the assumption of normal distribution. Based on this optimal fusion criterion, a general multi-sensor optimal information fusion decentralized Kalman filter with a two-layer fusion structure is given for discrete-time linear stochastic control systems with multiple sensors and correlated noises. The first fusion layer has a netted parallel structure to determine the cross covariance between every pair of faultless sensors at each time step. The second fusion layer is the fusion center that determines the optimal fusion matrix weights and obtains the optimal fusion filter. Compared with the centralized filter, the fusion filter reduces the computational burden, and its precision is lower than that of the centralized filter when all sensors are faultless, but it has fault tolerance and robustness properties when some sensors are faulty. Further, the precision of the fusion filter is higher than that of each local filter. Applying it to a radar tracking system with three sensors demonstrates its effectiveness.
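A stripped-down sketch of the underlying fusion idea for the special case of independent, unbiased scalar sensor estimates: weight each estimate by the inverse of its error variance. The two-sensor setup and the variances are illustrative assumptions; the filter above additionally handles matrix weights, correlated noises, and the two-layer structure.

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-variance combination of independent unbiased estimates (inverse-variance weights)."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)            # normalized inverse-variance weights
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / v)          # never larger than the smallest sensor variance
    return fused, fused_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = 5.0
    z1 = truth + rng.normal(0.0, 1.0)          # sensor 1, error variance 1.0
    z2 = truth + rng.normal(0.0, 0.5)          # sensor 2, error variance 0.25
    est, var = fuse([z1, z2], [1.0, 0.25])
    print(f"sensor 1: {z1:.3f}  sensor 2: {z2:.3f}  fused: {est:.3f} (var {var:.3f})")
```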
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
Distributed detection with multiple sensors I. Fundamentals In this paper, basic results on distributed detection are reviewed. In particular, we consider the parallel and the serial architectures in some detail and discuss the decision rules obtained from their optimization based on the Neyman-Pearson (NP) criterion and the Bayes formulation. For conditionally independent sensor observations, the optimality of the likelihood ratio test (LRT) at the sensors is established. General comments on several important issues are made including the computational complexity of obtaining the optimal solutions, the design of detection networks with more general topologies, and applications to different areas.
Non-Bayesian social learning. We develop a dynamic model of opinion formation in social networks when the information required for learning a parameter may not be at the disposal of any single agent. Individuals engage in communication with their neighbors in order to learn from their experiences. However, instead of incorporating the views of their neighbors in a fully Bayesian manner, agents use a simple updating rule which linearly combines their personal experience and the views of their neighbors. We show that, as long as individuals take their personal signals into account in a Bayesian way, repeated interactions lead them to successfully aggregate information and learn the true parameter. This result holds in spite of the apparent naïveté of agentsʼ updating rule, the agentsʼ need for information from sources the existence of which they may not be aware of, worst prior views, and the assumption that no agent can tell whether her own views or those of her neighbors are more accurate.
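A toy simulation of the updating rule described above: each agent Bayes-updates its own belief with a private signal and then linearly combines the result with its neighbors' current beliefs. The two-state world, the signal accuracy of 0.6, the ring network, and the number of rounds are illustrative assumptions.

```python
import numpy as np

def bayes_update(mu, signal, p_good=0.6):
    """Posterior P(theta=1) given a binary signal that equals theta with probability p_good."""
    like1 = p_good if signal == 1 else 1 - p_good      # P(signal | theta = 1)
    like0 = (1 - p_good) if signal == 1 else p_good    # P(signal | theta = 0)
    return like1 * mu / (like1 * mu + like0 * (1 - mu))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, theta = 6, 1                                    # true state of the world
    mu = np.full(n, 0.5)                               # initial beliefs P(theta = 1)
    A = np.zeros((n, n))                               # ring weights: self 1/2, each neighbor 1/4
    for i in range(n):
        A[i, i], A[i, (i - 1) % n], A[i, (i + 1) % n] = 0.5, 0.25, 0.25
    for _ in range(300):
        p1 = 0.6 if theta == 1 else 0.4                # signal distribution under the true state
        signals = (rng.random(n) < p1).astype(int)
        updated = np.array([bayes_update(mu[i], s) for i, s in enumerate(signals)])
        mu = np.diag(A) * updated + (A - np.diag(np.diag(A))) @ mu
    print(np.round(mu, 3))                             # beliefs typically concentrate near theta = 1
```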
On the cover time and mixing time of random geometric graphs The cover time and mixing time of graphs have much relevance to algorithmic applications and have been extensively investigated. Recently, with the advent of ad hoc and sensor networks, an interesting class of random graphs, namely random geometric graphs, has gained new relevance and its properties have been the subject of much study. A random geometric graph G(n,r) is obtained by placing n points uniformly at random on the unit square and connecting two points iff their Euclidean distance is at most r. The phase transition behavior with respect to the radius r of such graphs has been of special interest. We show that there exists a critical radius r_opt such that for any r ≥ r_opt, G(n,r) has optimal cover time of Θ(n log n) with high probability, and, importantly, r_opt = Θ(r_con), where r_con denotes the critical radius guaranteeing asymptotic connectivity. Moreover, since a disconnected graph has infinite cover time, there is a phase transition and the corresponding threshold width is O(r_con). On the other hand, the radius required for rapid mixing is r_rapid = ω(r_con), and, in particular, r_rapid = Θ(1/poly(log n)). We are able to draw our results by giving a tight bound on the electrical resistance and conductance of G(n,r) via certain constructed flows.
Mirror descent and nonlinear projected subgradient methods for convex optimization The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent in the decision variables dimension, and thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be mildly dependent in the dimension of the problem.
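The entropic variant over the unit simplex mentioned at the end of the abstract admits a very short sketch: an exponentiated-gradient step followed by renormalization. The quadratic objective, the target point, and the step size below are illustrative assumptions.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, steps=500, eta=0.1):
    """Exponentiated-gradient updates that keep the iterate on the unit simplex."""
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))   # mirror (multiplicative) step
        x = x / x.sum()                  # normalization projects back onto the simplex
    return x

if __name__ == "__main__":
    Q = np.diag([1.0, 2.0, 4.0])
    target = np.array([0.6, 0.3, 0.1])                 # lies on the simplex, hence it is the minimizer
    grad = lambda x: Q @ (x - target)                  # gradient of 0.5*(x-target)^T Q (x-target)
    x0 = np.ones(3) / 3.0
    print(np.round(entropic_mirror_descent(grad, x0), 3))   # approaches target
```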
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and has significantly improvement in line and load transient responses as well as performance in power-supply rejection ratio (PSRR).
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
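As a concrete illustration of the averaging special case, here is a minimal Python sketch of pairwise push-pull gossip: in every exchange two nodes replace their values by the pair's mean, so the global sum is conserved and all local values converge to the network-wide average. The uniform random peer selection and synchronous round structure are simplifying assumptions, not the exact protocol of the paper.

```python
import random

def gossip_average(values, rounds):
    """Sketch of pairwise push-pull gossip averaging.

    values: initial local values, one per node. In each round every node
    contacts a uniformly random peer and both replace their values with the
    pair's mean, so the global sum is conserved while local values converge
    toward the network-wide average (the AVERAGE aggregate).
    """
    v = list(map(float, values))
    n = len(v)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)   # random peer; contacting oneself is harmless
            v[i] = v[j] = (v[i] + v[j]) / 2.0
    return v

# Example: ten nodes holding 0..9 quickly converge toward the true mean 4.5.
print(gossip_average(range(10), rounds=20))
```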
Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs This paper presents new relaxed stability conditions and LMI- (linear matrix inequality) based designs for both continuous and discrete fuzzy control systems. They are applied to design problems of fuzzy regulators and fuzzy observers. First, Takagi and Sugeno's fuzzy models and some stability results are recalled. To design fuzzy regulators and fuzzy observers, nonlinear systems are represented by Takagi-Sugeno (TS) fuzzy models. The concept of parallel distributed compensation is employed to design fuzzy regulators and fuzzy observers from the TS fuzzy models. New stability conditions are obtained by relaxing the stability conditions derived in previous papers. LMI-based design procedures for fuzzy regulators and fuzzy observers are constructed using the parallel distributed compensation and the relaxed stability conditions. Other LMIs with respect to decay rate and constraints on control input and output are also derived and utilized in the design procedures. Design examples for nonlinear systems demonstrate the utility of the relaxed stability conditions and the LMI-based design procedures.
Digital signal processors in cellular radio communications Contemporary wireless communications are based on digital communications technologies. The recent commercial success of mobile cellular communications has been enabled in part by successful designs of digital signal processors with appropriate on-chip memories and specialized accelerators for digital transceiver operations. This article provides an overview of fixed-point digital signal processors and ways in which they are used in cellular communications. Directions for future wireless-focused DSP technology developments are discussed.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks to provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.1
0.1
0.1
0.1
0.1
0.02
0
0
0
0
0
0
0
0
Digital Multiplier-Less Spiking Neural Network Architecture of Reinforcement Learning in a Context-Dependent Task Neuromorphic engineers develop event-based spiking neural networks (SNNs) in hardware. These SNNs more closely resemble the dynamics of biological neurons than conventional artificial neural networks and achieve higher efficiency thanks to the event-based, asynchronous nature of the processing. Learning in hardware SNNs is a more challenging task, however. The conventional supervised learning methods cannot be directly applied to SNNs due to the non-differentiable event-based nature of their activation. For this reason, learning in SNNs is currently an active research topic. Reinforcement learning (RL) is a particularly promising learning method for neuromorphic implementation, especially in the field of autonomous agents' control. An SNN realization of a bio-inspired RL model is the focus of this work. In particular, in this article, we propose a new digital multiplier-less hardware implementation of an SNN with RL capability. We show how the proposed network can learn stimulus-response associations in a context-dependent task. The task is inspired by biological experiments that study RL in animals. The architecture is described using the standard digital design flow and uses power- and space-efficient cores. The proposed hardware SNN model is compared both to data from animal experiments and to a computational model. We perform a comparison to the behavioral experiments using a robot, to show the learning capability in hardware in a closed sensory-motor loop.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
Energy efficient parallel neuromorphic architectures with approximate arithmetic on FPGA. In this paper, we present the parallel neuromorphic processor architectures for spiking neural networks on FPGA. The proposed architectures address several critical issues pertaining to efficient parallelization of the update of membrane potentials, on-chip storage of synaptic weights and integration of approximate arithmetic units. The trade-offs between throughput, hardware cost and power overheads for different configurations are thoroughly investigated. Notably, for the application of handwritten digit recognition, a promising training speedup of 13.5x and a recognition speedup of 25.8x are achieved by a parallel implementation whose degree of parallelism is 32. In spite of the 120 MHz operating frequency, the 32-way parallel hardware design demonstrates a 59.4x training speedup over the single-thread software program running on a 2.2 GHz general purpose CPU. Equally importantly, by leveraging the built-in resilience of the neuromorphic architecture we demonstrate the energy benefit resulting from the use of approximate arithmetic computation. Up to 20% improvement in energy consumption is achieved by integrating approximate multipliers into the system while maintaining almost the same level of recognition rate achieved using standard multipliers. To the best of our knowledge, it is the first time that approximate computing and parallel processing are applied to FPGA based spiking neural networks. The influence of the parallel processing on the benefits of approximate computing is also discussed in detail.
Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons. Multicompartment emulation is an essential step to enhance the biological realism of neuromorphic systems and to further understand the computational power of neurons. In this paper, we present a hardware efficient, scalable, and real-time computing strategy for the implementation of large-scale biologically meaningful neural networks with one million multi-compartment neurons (CMNs). The hardware platform uses four Altera Stratix III field-programmable gate arrays, and both the cellular and the network levels are considered, which provides an efficient implementation of a large-scale spiking neural network with biophysically plausible dynamics. At the cellular level, a cost-efficient multi-CMN model is presented, which can reproduce the detailed neuronal dynamics with representative neuronal morphology. A set of efficient neuromorphic techniques for single-CMN implementation are presented with all the hardware cost of memory and multiplier resources removed and with hardware performance of computational speed enhanced by 56.59% in comparison with the classical digital implementation method. At the network level, a scalable network-on-chip (NoC) architecture is proposed with a novel routing algorithm to enhance the NoC performance including throughput and computational latency, leading to higher computational efficiency and capability in comparison with state-of-the-art projects. The experimental results demonstrate that the proposed work can provide an efficient model and architecture for large-scale biologically meaningful networks, while the hardware synthesis results demonstrate low area utilization and high computational speed that supports the scalability of the approach.
Spike Counts based Low Complexity SNN Architecture with Binary Synapse. In this paper, we present an energy and area efficient spike neural network (SNN) processor based on novel spike counts based methods. For the low cost SNN design, we propose hardware-friendly complexity reduction techniques for both of learning and inferencing modes of operations. First, for the unsupervised learning process, we propose a spike counts based learning method. The novel learning app...
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using a deep compression technique in the spiking neural network chip, the amount of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves a throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and an energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN3) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed
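The fast non-dominated sorting step described above is compact enough to sketch directly. The following Python is a minimal illustration of the O(MN²) bookkeeping (domination counts plus dominated-solution sets), assuming all objectives are minimized; it is not the authors' reference implementation and omits the crowding-distance and selection machinery.

```python
def fast_nondominated_sort(objs):
    """Fast non-dominated sorting sketch (all objectives minimized).

    objs: list of objective vectors. Returns a list of fronts, each a list of
    indices into objs; front 0 is the non-dominated (Pareto) set.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    n = len(objs)
    dominated_by_me = [[] for _ in range(n)]   # S_i: solutions that i dominates
    counts = [0] * n                           # n_i: how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by_me[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by_me[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]   # drop the trailing empty front

# Two-objective example: (1,5), (2,2) and (5,1) are mutually non-dominated (front 0),
# (4,4) is dominated only by (2,2) (front 1), and (6,6) ends up in front 2.
print(fast_nondominated_sort([(1, 5), (2, 2), (5, 1), (4, 4), (6, 6)]))
```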
Language-based information-flow security Current standard security practices do not provide substantial assurance that the end-to-end behavior of a computing system satisfies important security policies such as confidentiality. An end-to-end confidentiality policy might assert that secret input data cannot be inferred by an attacker through the attacker's observations of system output; this policy regulates information flow. Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Previously, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this paper, we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of work in the area and identify some important open challenges.
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
A 45 nm 8-Core Enterprise Xeon® Processor This paper describes a 2.3 billion transistor, 8-core, 16-thread, 64-bit Xeon® EX processor with a 24 MB shared L3 cache implemented in a 45 nm nine-metal process. Multiple clock and voltage domains are used to reduce power consumption. Long channel devices and cache sleep mode are used to minimize leakage. Core and cache recovery improve manufacturing yields and enable multiple product flavors f...
Thermal monitoring mechanisms for chip multiprocessors With large-scale integration and increasing power densities, thermal management has become an important tool to maintain performance and reliability in modern process technologies. In the core of dynamic thermal management schemes lies accurate reading of on-die temperatures. Therefore, careful planning and embedding of thermal monitoring mechanisms into high-performance systems becomes crucial. In this paper, we propose three techniques to create sensor infrastructures for monitoring the maximum temperature on a multicore system. Initially, we extend a nonuniform sensor placement methodology proposed in the literature to handle chip multiprocessors (CMPs) and show its limitations. We then analyze a grid-based approach where the sensors are placed on a static grid covering each core and show that the sensor readings can differ from the actual maximum core temperature by as much as 12.6°C when using 16 sensors per core. Also, as large as 10.6% of the thermal emergencies are not captured using the same number of sensors. Based on this observation, we first develop an interpolation scheme, which estimates the maximum core temperature through interpolation of the readings collected at the static grid points. We show that the interpolation scheme improves the measurement accuracy and emergency coverage compared to grid-based placement when using the same number of sensors. Second, we present a dynamic scheme where only a subset of the sensor readings is collected to predict the maximum temperature of each core. Our results indicate that we can reduce the number of active sensors by as much as 50%, while maintaining similar measurement accuracy and emergency coverage compared to the case where the entire sensor set on the grid is sampled at all times.
On Fine-Grained Runtime Power Budgeting for Networks-on-Chip Systems. Power budgeting is an essential aspect of networks-on-chip (NoC) to meet the power constraint for on-chip communications while assuring the best possible overall system performance. For simplicity and ease of implementation, existing NoC power budgeting schemes treat all the individual routers uniformly when allocating power to them. However, such homogeneous power budgeting schemes ignore the fac...
TSA-NoC: Learning-Based Threat Detection and Mitigation for Secure Network-on-Chip Architecture Networks-on-chip (NoCs) are playing a critical role in modern multicore architecture, and NoC security has become a major concern. Maliciously implanted hardware Trojans (HTs) inject faults into on-chip communications that saturate the network, resulting in the leakage of sensitive data via side channels and significant performance degradation. While existing techniques protect NoCs by detecting and isolating HT-infected components, they inevitably incur occasional inaccurate detection with considerable network latency and power overheads. We propose TSA-NoC, a learning-based design framework for secure and efficient on-chip communication. The proposed TSA-NoC uses an artificial neural network for runtime HT-detection with higher accuracy. Furthermore, we propose a deep-reinforcement-learning-based adaptive routing design for HT mitigation with the aim of minimizing network latency and maximizing energy efficiency. Simulation results show that TSA-NoC achieves up to 97% HT-detection accuracy, 70% improved energy efficiency, and 29% reduced network latency as compared to state-of-the-art HT-mitigation techniques.
An Evaluation of High-Level Mechanistic Core Models Large core counts and complex cache hierarchies are increasing the burden placed on commonly used simulation and modeling techniques. Although analytical models provide fast results, they do not apply to complex, many-core shared-memory systems. In contrast, detailed cycle-level simulation can be accurate but also tends to be slow, which limits the number of configurations that can be evaluated. A middle ground is needed that provides for fast simulation of complex many-core processors while still providing accurate results. In this article, we explore, analyze, and compare the accuracy and simulation speed of high-abstraction core models as a potential solution to slow cycle-level simulation. We describe a number of enhancements to interval simulation to improve its accuracy while maintaining simulation speed. In addition, we introduce the instruction-window centric (IW-centric) core model, a new mechanistic core model that bridges the gap between interval simulation and cycle-accurate simulation by enabling high-speed simulations with higher levels of detail. We also show that using accurate core models like these is important for memory subsystem studies, and that simple, naive models, like a one-IPC core model, can lead to misleading and incorrect results and conclusions in practical design studies. Validation against real hardware shows good accuracy, with an average single-core error of 11.1% and a maximum of 18.8% for the IW-centric model with a 1.5× slowdown compared to interval simulation.
An exploration of L2 cache covert channels in virtualized environments Recent exploration into the unique security challenges of cloud computing has shown that when virtual machines belonging to different customers share the same physical machine, new forms of cross-VM covert channel communication arise. In this paper, we explore one of these threats, L2 cache covert channels, and demonstrate the limits of this threat by providing a quantification of the channel bit rates and an assessment of its ability to do harm. Through progressively refining models of cross-VM covert channels from the derived maximums, to implementable channels in the lab, and finally in Amazon EC2 itself, we show how a variety of factors impact our ability to create effective channels. While we demonstrate a covert channel with considerably higher bit rate than previously reported, we assess that even at such improved rates, the harm of data exfiltration from these channels is still limited to the sharing of small, if important, secrets such as private keys.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and to evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
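The randomized rotation of cluster-heads is usually described through a per-round election threshold. The Python sketch below shows the threshold-based election step commonly associated with LEACH; the function names are illustrative, and the simple `was_head_this_cycle` flag stands in for the per-cycle bookkeeping a real node would keep.

```python
import random

def leach_threshold(p, r):
    """Cluster-head election threshold T(n) for a node that has not yet served
    as cluster-head in the current rotation cycle.

    p: desired fraction of cluster-heads per round; r: current round number.
    """
    cycle = int(round(1.0 / p))            # rounds per rotation cycle
    return p / (1.0 - p * (r % cycle))

def elect_cluster_head(p, r, was_head_this_cycle):
    """A node that already served this cycle waits; otherwise it elects itself
    cluster-head with probability T(n)."""
    if was_head_this_cycle:
        return False
    return random.random() < leach_threshold(p, r)

# Example: with p = 0.05 the threshold rises from 0.05 (r = 0) to 1.0 (r = 19),
# so every remaining node is certain to serve by the end of the 20-round cycle.
print([round(leach_threshold(0.05, r), 3) for r in (0, 10, 19)])
```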
Side-Channel Leaks in Web Applications: A Reality Today, a Challenge Tomorrow With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG-stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS-stage is carefully optimized. We show that a CS-stage with deep submicron transistors can have high IIP2, because the v_GS·v_DS cross-term in a two-dimensional Taylor approximation of the I_DS(V_GS, V_DS) characteristic can cancel the traditionally dominant square-law term in the I_DS(V_GS) relation at practical gain values. Using standard 65 nm transistors at 1.2 V supply voltage, we realize a balun-LNA with 15 dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz and 3.5 GHz with gain and phase errors below 0.3 dB and ±2 degrees. The total power consumption is 21 mW, while the active area is only 0.01 mm2.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In the literature today, a SAR ADC in combination with digital compression is typically used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
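To make the event-driven sampling being compared concrete, here is a minimal Python sketch of level-crossing sampling applied to an already digitized waveform: an event is emitted only when the input has moved by at least one quantization step delta from the last emitted level. It is an idealized software model for illustration, not the circuit or power model described in the paper.

```python
import numpy as np

def level_crossing_sample(signal, dt, delta):
    """Idealized level-crossing (event-driven) sampler over a uniformly sampled input.

    An event (time, level) is emitted only when the input has moved by at least
    `delta` from the last emitted level; the emitted level advances one
    quantization step toward the input per event. Slowly varying signals thus
    produce few events, which is the compression effect the comparison hinges on.
    """
    events = []
    level = signal[0]
    for k, x in enumerate(signal):
        while abs(x - level) >= delta:      # a fast edge may cross several levels
            level += delta * np.sign(x - level)
            events.append((k * dt, level))
    return events

# Example: a 1 Hz sine sampled at 1 kHz with delta = 0.1 yields roughly 40 events
# instead of 1000 uniform samples.
t = np.arange(0.0, 1.0, 1e-3)
ev = level_crossing_sample(np.sin(2 * np.pi * t), dt=1e-3, delta=0.1)
print(len(ev), "events for", len(t), "uniform samples")
```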
1.2
0.2
0.2
0.2
0.1
0.015385
0
0
0
0
0
0
0
0
Neural sliding-mode pinning control for output synchronization for uncertain general complex networks A novel approach to output synchronization for uncertain general complex networks with non-identical nodes is proposed. This goal is achieved by applying a neural controller to a small fraction of the network nodes (the pinned ones); this controller is composed of an on-line identifier based on a recurrent high-order neural network, and a sliding-mode controller. An illustrative example is included, which is composed of a network of ten nodes, with different self-dynamics, illustrating the effectiveness and good performance of the proposed control scheme.
Finite-Time and Fixed-Time Cluster Synchronization With or Without Pinning Control. In this paper, the finite-time and fixed-time cluster synchronization problem for complex networks with or without pinning control are discussed. Finite-time (or fixed-time) synchronization has been a hot topic in recent years, which means that the network can achieve synchronization in finite-time, and the settling time depends on the initial values for finite-time synchronization (or the settlin...
The Emergence of Intelligent Enterprises: From CPS to CPSS When IEEE Intelligent Systems solicited ideas for a new department, cyber-physical systems (CPS) received overwhelming support. Cyber-Physical-Social Systems (CPSS) is the new name for CPS. CPSS is the enabling platform technology that will lead us to an era of intelligent enterprises and industries. Internet use and cyberspace activities have created an overwhelming demand for the rapid development and application of CPSS. CPSS must be conducted with a multidisciplinary approach involving the physical, social, and cognitive sciences, and AI-based intelligent systems will be key to any successful construction and deployment.
Passivity-based stabilization and passive synchronization of complex nonlinear networks. This paper investigates the passive stabilization and passive synchronization problems for complex nonlinear networks. Two classes of complex networks are considered with or without time delays (including the state time-varying delays, feedback time-varying delays and interconnection time-varying delays). The purpose of passive stabilization and passive synchronization is to design a state feedback controller such that the nonlinear dynamical network and the corresponding error dynamical network are passive and stable asymptotically. Delay-independent sufficient conditions for complex nonlinear networks without delays and delay-dependent sufficient conditions for complex nonlinear networks with time-varying delays are provided, respectively, to solve the passive stabilization and passive synchronization problems for the given networks. Two numerical examples are employed to illustrate the effectiveness of the proposed results.
Passivity of coupled memristive delayed neural networks with fixed and adaptive coupling weights. In this paper, we are concerned with two coupled memristive delayed neural networks with different dimensions of input and output. The essential difference between them is whether coupling delay is incorporated in the network model. First, we respectively analyze the passivity of the presented network models, and some passivity conditions are deduced under the help of some inequality techniques. Then, considering that the networks cannot achieve passivity by themselves in some cases, an edge-based adaptive strategy is developed for guaranteeing the passivity, input-strict passivity and output-strict passivity of the raised networks. At the end, the correctness of the acquired criteria is verified by two illustrative examples.
PID Control for Synchronization of Complex Dynamical Networks With Directed Topologies Over the past decades, the synchronization of complex networks with directed topologies has received considerable attention owing to its extensive applications in the realistic world. Design of proportional-integral-derivative (PID) control protocols for achieving synchronization with directed networks is known to be a challenging task. The purpose of this paper is to establish a connection between the PID control protocols and synchronization of complex dynamical networks with directed topologies. Based on the classical complex network model, we investigate global synchronization with PD controller of a balanced strongly connected directed network and global synchronization with PI controller of a strongly connected directed network, and a directed network containing a spanning tree, respectively. Several sets of sufficient conditions are established under which the network reaches global synchronization. The simulation examples are presented to verify the efficiency of the theoretical results.
Edge-Based Decentralized Adaptive Pinning Synchronization of Complex Networks Under Link Attacks This article studies the pinning synchronization problem with edge-based decentralized adaptive schemes under link attacks. The link attacks considered here are a class of malicious attacks to break links between neighboring nodes in complex networks. In such an insecure network environment, two kinds of edge-based decentralized adaptive update strategies (synchronous and asynchronous) on coupling strengths and gains are designed to realize the security synchronization of complex networks. Moreover, by virtue of the edge pinning technique, the corresponding secure synchronization problem is considered under the case where only a small fraction of coupling strengths and gains is updated. These designed adaptive strategies do not require any global information, and therefore, the obtained results in this article are developed in a fully decentralized framework. Finally, a numerical example is provided to verify the availability of the achieved theoretical outcomes.
Exponential Synchronization for Delayed Dynamical Networks via Intermittent Control: Dealing With Actuator Saturations. Over the past two decades, the synchronization problem for dynamical networks has drawn significant attention due to its clear practical insight in biological systems, social networks, and neuroscience. In the case where a dynamical network cannot achieve the synchronization by itself, the feedback controller should be added to drive the network toward a desired orbit. On the other hand, the time delays may often occur in the nodes or the couplings of a dynamical network, and the existence of time delays may induce some undesirable dynamics or even instability. Moreover, in the course of implementing a feedback controller, the inevitable actuator limitations could downgrade the system performance and, in the worst case, destabilize the closed-loop dynamics. The main purpose of this paper is to consider the synchronization problem for a class of delayed dynamical networks with actuator saturations. Each node of the dynamical network is described by a nonlinear system with a time-varying delay and the intermittent control strategy is proposed. By using a combination of novel sector conditions, piecewise Lyapunov-like functionals and the switched system approach, delay-dependent sufficient conditions are first obtained under which the dynamical network is locally exponentially synchronized. Then, the explicit characterization of the controller gains is established by means of the feasibility of certain matrix inequalities. Furthermore, optimization problems are formulated in order to acquire a larger estimate of the set of initial conditions for the evolution of the error dynamics when designing the intermittent controller. Finally, two examples are given to show the benefits and effectiveness of the developed theoretical results.
Input-to-state stability for discrete-time nonlinear systems The input-to-state stability property and iss small-gain theorems are introduced as the cornerstone of new stability criteria for discrete-time nonlinear systems.
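The discrete-time ISS property referred to above is usually stated as the following estimate; this is the standard textbook form, reproduced here for reference rather than quoted from the paper.

```latex
% Discrete-time input-to-state stability (ISS): there exist a class-KL function
% beta and a class-K function gamma such that every solution of
% x(k+1) = f(x(k), u(k)) satisfies
\[
  |x(k)| \;\le\; \beta\bigl(|x(0)|,\,k\bigr) \;+\; \gamma\bigl(\|u\|_{\infty}\bigr)
  \qquad \text{for all } k \ge 0 .
\]
```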
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
A Low-Power CMOS LNA For Ultra-Wideband Wireless Receivers In this paper, a low-power ultra-wideband (UWB) low-noise amplifier (LNA) is proposed. Here, we propose a structure that combines the common-gate stage with band-pass filters, which can reduce the parasitic capacitance of the transistor and achieve wideband input matching. The π-section LC network technique is employed in the LNA to achieve sufficiently flat gain. A bias resistor of large value is placed between the source and the body nodes to prevent the body effect and reduce noise. Numerical simulations are based on the TSMC 0.18 μm 1P6M process. The LNA achieves 10.0 to 12.4 dB gain from 3 GHz to 10.6 GHz and a 3.25 dB noise figure at 8.5 GHz, operates from a 1.5 V power supply, and dissipates 3 mW without the output buffer.
Interval observers for linear time-invariant systems with disturbances It is shown that, for any time-invariant exponentially stable linear system with additive disturbances, time-varying exponentially stable interval observers can be constructed. The technique of construction relies on the Jordan canonical form that any real matrix admits and on time-varying changes of coordinates for elementary Jordan blocks which lead to cooperative linear systems. The approach is applied to detectable linear systems.
Estimating stable delay intervals with a discretized Lyapunov-Krasovskii functional formulation. In general, a system with time delay may have multiple stable delay intervals. Especially, a stable delay interval does not always contain zero. Asymptotically accurate stability conditions such as discretized Lyapunov–Krasovskii functional (DLF) method and sum-of-square (SOS) method are especially effective for such systems. In this article, a DLF-based method is proposed to estimate the maximal stable delay interval accurately without using bisection when one point in this interval is given. The method is formulated as a generalized eigenvalue problem (GEVP) of linear matrix inequalities (LMIs), and an accurate estimate may be reached by iteration either in a finite number of steps or asymptotically. The coupled differential–difference equation formulation is used to illustrate the method. However, the idea can be easily adapted to the traditional differential–difference equation setting.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows which are adjacent to the prediction interval. This algorithm is able to track the input signal's variation range and automatically adjust the subrange interval and update the prediction code. The QRS-complex detection algorithm integrates a synchronous time-sequential ADC and a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and a 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate given a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves an FoM of 48 fJ/conversion-step in the best case. Also, the prototype is tested with an ECG signal input and extracts the heartbeat signal successfully.
1.084444
0.08
0.066667
0.066667
0.066667
0.066667
0.066667
0.04
0.000889
0
0
0
0
0
Analysis and Design of Interval Type-2 Polynomial-Fuzzy-Model-Based Networked Tracking Control Systems Highly nonlinear systems exist in many real-world applications. The control design of such systems is challenging even for existing advanced control theory. Besides, when there is uncertainty in the nonlinear system and networked control strategy is considered, the problem will become even more complicated. To confront the problem, for the first time, this article provides a solution for the tracking control design of the nonlinear networked control systems (NCSs) subject to uncertainty under the polynomial event-triggered mechanism (PETM). In the proposed tracking control scheme, the nonlinearity in the NCSs is effectively represented by a polynomial fuzzy model while the uncertainty is handled by the interval type-2 (IT2) fuzzy sets. The tracking control objective is to properly design the event-triggered IT2 polynomial controller, which drives the states of the plant to track those of the stable reference system under the PETM. Different from the existing works in the literature, the event-triggering condition can be varied according to the state to improve the flexibility and capacity of event-triggered mechanism. Furthermore, in some works reported in the literature, the perfect match of premise variables is assumed. However, this assumption is extremely difficult to be fulfilled in the event-triggered control applications. To address the long-standing challenge of an intrinsic mismatch issue of the premise variables due to event-triggering mechanism, in the proposed analysis, the imperfect premise matching (IPM) concept is adopted along with the membership-function-dependent (MFD) approach to facilitate the stability analysis and control synthesis. The stability conditions are summarized as sum-of-squares (SOS). The H∞ index is utilized to evaluate the tracking control performance, and the performance can be improved through minimizing the H∞ index. A detailed simulation example is provided to verify the effectiveness of the proposed method.
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Stability of switched positive linear systems with average dwell time switching. In this paper, the stability analysis problem for a class of switched positive linear systems (SPLSs) with average dwell time switching is investigated. A multiple linear copositive Lyapunov function (MLCLF) is first introduced, by which sufficient stability criteria, in terms of a set of linear matrix inequalities, are given for the underlying systems in both continuous-time and discrete-time contexts. The stability results for SPLSs under arbitrary switching, which have been previously studied in the literature, can be easily obtained by reducing the MLCLF to the common linear copositive Lyapunov function used for those systems under arbitrary switching. Finally, a numerical example is given to show the effectiveness and advantages of the proposed techniques.
Output tracking control for a class of continuous-time T-S fuzzy systems This paper investigates the problem of output tracking for nonlinear systems with actuator fault using interval type-2 (IT2) fuzzy model approach. An IT2 state-feedback fuzzy controller is designed to perform the tracking control problem, where the membership functions can be freely chosen since the number of fuzzy rules is different from that of the IT2 T-S fuzzy model. Based on Lyapunov stability theory, an existence condition of IT2 fuzzy H ∞ output tracking controller is obtained to guarantee that the output of the closed-loop IT2 control system can track the output of a given reference model well in the H ∞ sense. Finally, two illustrative examples are given to demonstrate the effectiveness and merits of the proposed design techniques.
Adaptive Fault-Tolerant Tracking Control for Discrete-Time Multiagent Systems via Reinforcement Learning Algorithm This article investigates the adaptive fault-tolerant tracking control problem for a class of discrete-time multiagent systems via a reinforcement learning algorithm. The action neural networks (NNs) are used to approximate unknown and desired control input signals, and the critic NNs are employed to estimate the cost function in the design procedure. Furthermore, the direct adaptive optimal controllers are designed by combining the backstepping technique with the reinforcement learning algorithm. Comparing the existing reinforcement learning algorithm, the computational burden can be effectively reduced by using the method of less learning parameters. The adaptive auxiliary signals are established to compensate for the influence of the dead zones and actuator faults on the control performance. Based on the Lyapunov stability theory, it is proved that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. Finally, some simulation results are presented to illustrate the effectiveness of the proposed approach.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
A Survey of Reachability and Controllability for Positive Linear Systems. This paper is a survey of reachability and controllability results for discrete-time positive linear systems. It presents a variety of criteria in both algebraic and digraph forms for recognising these fundamental system properties with direct implications not only in dynamic optimization problems (such as those arising in inventory and production control, manpower planning, scheduling and other areas of operations research) but also in studying properties of reachable sets, in feedback control problems, and others. The paper highlights the intrinsic combinatorial structure of reachable/controllable positive linear systems and reveals the monomial components of such systems. The system matrix decomposition into monomial components is demonstrated by solving some illustrative examples.
GloMoSim: a library for parallel simulation of large-scale wireless networks A number of library-based parallel and sequential network simulators have been designed. This paper describes a library, called GloMoSim (for Global Mobile system Simulator), for parallel simulation of wireless networks. GloMoSim has been designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API. Models of protocols at one layer interact with those at a lower (or higher) layer only via these APIs. The modular implementation enables consistent comparison of multiple protocols at a given layer. The parallel implementation of GloMoSim can be executed using a variety of conservative synchronization protocols, which include the null message and conditional event algorithms. This paper describes the GloMoSim library, addresses a number of issues relevant to its parallelization, and presents a set of experimental results on the IBM 9076 SP, a distributed memory multicomputer. These experiments use models constructed from the library modules. 1 Introduction The rapid advancement in portable computing platforms and wireless communication technology has led to significant interest in mobile computing and mobile networking. Two primary forms of mobile computing are becoming popular: first, mobile computers continue to heavily use wired network infrastructures. Instead of being hardwired to a single location (or IP address), a computer can dynamically move to multiple locations while maintaining application transparency. Protocols such as ...
TAG: a Tiny AGgregation service for ad-hoc sensor networks We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.
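The following Python sketch illustrates the general idea of in-network aggregation that TAG exploits: only small partial-aggregate records travel up the routing tree instead of raw readings. The tree, readings, and the (sum, count) encoding for AVG are illustrative assumptions, not TAG's actual implementation.

```python
def merge_avg(a, b):
    """Merge two partial AVG records of the form (sum, count)."""
    return (a[0] + b[0], a[1] + b[1])

def aggregate(node, children, readings):
    """Recursively combine a node's own reading with its children's partial
    state; only the small (sum, count) record travels up the tree."""
    state = (readings[node], 1)
    for c in children.get(node, []):
        state = merge_avg(state, aggregate(c, children, readings))
    return state

# Toy routing tree: root 0 with children 1 and 2; node 1 has child 3.
children = {0: [1, 2], 1: [3]}
readings = {0: 20.0, 1: 22.0, 2: 19.0, 3: 25.0}
s, n = aggregate(0, children, readings)
print("network AVG =", s / n)
```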
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments in power electronics, permanent magnet materials, and microelectronic systems justify analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
SPECS: A Lightweight Runtime Mechanism for Protecting Software from Security-Critical Processor Bugs Processor implementation errata remain a problem, and worse, a subset of these bugs are security-critical. We classified 7 years of errata from recent commercial processors to understand the magnitude and severity of this problem, and found that of 301 errata analyzed, 28 are security-critical. We propose the SECURITY-CRITICAL PROCESSOR ERRATA CATCHING SYSTEM (SPECS) as a low-overhead solution to this problem. SPECS employs a dynamic verification strategy that is made lightweight by limiting protection to only security-critical processor state. As a proof-of-concept, we implement a hardware prototype of SPECS in an open source processor. Using this prototype, we evaluate SPECS against a set of 14 bugs inspired by the types of security-critical errata we discovered in the classification phase. The evaluation shows that SPECS is 86% effective as a defense when deployed using only ISA-level state; incurs less than 5% area and power overhead; and has no software run-time overhead.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
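To illustrate the data-volume argument made in this abstract, the minimal Python sketch below counts level-crossing events versus fixed-rate samples for a bursty, ECG-like waveform; the synthetic signal, the step size delta, and the crossing rule are assumptions for illustration and do not reproduce the paper's power model.

```python
import numpy as np

def level_crossing_samples(x, delta):
    """Count level-crossing 'events': a new sample is produced whenever the
    signal moves by at least one quantization step `delta` from the last
    captured value (timer-less, single-comparison rule - illustrative only)."""
    last = x[0]
    events = 0
    for v in x[1:]:
        if abs(v - last) >= delta:
            last = v
            events += 1
    return events

fs = 1000                                  # assumed fixed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Slow baseline plus one narrow pulse: most of the time nothing happens.
ecg_like = 0.1 * np.sin(2 * np.pi * 1.2 * t) \
         + 0.8 * np.exp(-((t % 0.8) - 0.2) ** 2 / 2e-4)

print("fixed-rate samples:", len(t))
print("level-crossing events (delta = 1/32 FS):",
      level_crossing_samples(ecg_like, 1 / 32))
```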
score_0 through score_13: 1.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
Nonasymptotic Concentration Rates in Cooperative Learning–Part I: Variational Non-Bayesian Social Learning In this article, we study the problem of cooperative inference where a group of agents interacts over a network and seeks to estimate a joint parameter that best explains a set of network-wide observations using local information only. Agents do not know the network topology or the observations of other agents. We explore a variational interpretation of the Bayesian posterior and its relation to the stochastic mirror descent algorithm to prove that, under appropriate assumptions, the beliefs generated by the proposed algorithm concentrate around the true parameter exponentially fast. In Part I of this two-part article series, we focus on providing a variational approach to distributed Bayesian filtering. Moreover, we develop computationally efficient algorithms for observation models in exponential families. We provide a novel nonasymptotic belief concentration analysis for distributed non-Bayesian learning on finite hypothesis sets. This new analysis is the basis for the results presented in Part II, which provides the first nonasymptotic belief concentration rate analysis for distributed non-Bayesian learning over networks on compact hypothesis sets. In addition, we provide extensive numerical analysis for various distributed inference tasks on networks for observation models in the exponential distribution families.
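A minimal Python sketch of a log-linear ("geometric averaging") distributed non-Bayesian belief update is given below to illustrate how local likelihoods and neighbor beliefs are combined over a network; the paper's variational, mirror-descent-based algorithm may differ in its exact update, and the network, likelihoods, and hypothesis set here are made up.

```python
import numpy as np

def update_beliefs(beliefs, likelihoods, A):
    """One round of the log-linear non-Bayesian update (illustrative rule):
        mu_i(theta) ∝ L_i(s_i | theta) * prod_j mu_j(theta)^{A_ij}
    `beliefs` and `likelihoods` are (n_agents, n_hypotheses); A is a
    row-stochastic mixing matrix describing the network."""
    log_mix = A @ np.log(beliefs)          # geometric averaging of neighbors
    new = likelihoods * np.exp(log_mix)    # multiply in local evidence
    return new / new.sum(axis=1, keepdims=True)

A = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
beliefs = np.full((3, 2), 0.5)             # two hypotheses, uniform priors
likelihoods = np.array([[0.7, 0.3]] * 3)   # all agents' data favor hypothesis 0
for _ in range(20):
    beliefs = update_beliefs(beliefs, likelihoods, A)
print(beliefs.round(3))                    # beliefs concentrate on hypothesis 0
```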
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
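As a small illustration of the dominance-frontier machinery used for SSA construction, the sketch below computes dominance frontiers from immediate dominators using the compact Cooper-Harvey-Kennedy formulation (which produces the same frontiers as the algorithm in this paper); the example CFG is an assumption.

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers given CFG predecessors and immediate
    dominators (`idom[entry] == entry`). Frontiers tell SSA construction
    where phi-functions are needed."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                    # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[n]:
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))     # 'a' and 'b' have 'join' in their frontier
```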
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
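A minimal Python sketch of the key-to-node mapping behind Chord (consistent hashing onto an identifier ring, with each key stored at its successor node) is shown below; it uses a global sorted view of the ring rather than Chord's per-node finger tables, and the hash width and node names are assumptions.

```python
import bisect
import hashlib

def node_id(name, m=16):
    """Hash a name onto an m-bit identifier ring (truncated SHA-1; illustrative)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** m)

class Ring:
    """A key is stored at its successor: the first node whose ID follows the
    key clockwise on the ring. Real Chord resolves this with O(log N) hops
    through finger tables instead of a global sorted list."""
    def __init__(self, names):
        self.ids = sorted(node_id(n) for n in names)
        self.by_id = {node_id(n): n for n in names}

    def successor(self, key):
        i = bisect.bisect_left(self.ids, node_id(key))
        return self.by_id[self.ids[i % len(self.ids)]]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print("'my-file' is stored at:", ring.successor("my-file"))
```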
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
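The sketch below shows textbook ADMM applied to the lasso, one of the applications listed in this abstract: an x-update (ridge solve), a z-update (soft thresholding), and a scaled dual update. The penalty rho, regularization lam, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for  min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # ridge solve
        z = soft_threshold(x + u, lam / rho)                # sparsifying step
        u = u + x - z                                       # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.nonzero(lasso_admm(A, b))[0])     # recovers (roughly) the true support
```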
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
score_0 through score_13: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 79 GHz Phase-Modulated 4 GHz-BW CW Radar Transmitter in 28 nm CMOS Millimeter-wave sensors perform robust and accurate remote motion sensing. We propose a 28 nm CMOS Radar TX that modulates a 79 GHz carrier with a 2 Gsps Pseudo-Noise sequence. The measured modulated output power at 79 GHz in 4 GHz BW is higher than +11 dBm (27°C), while the spurious emissions are below -20 dBc, fully satisfying the spectral mask regulations. The output RF BW where we can lock the injection-locked LO is 13 GHz. Overall, the TX draws 121 mW from a 0.9 V supply resulting in a record efficiency above 10%. More importantly, the TX is functional up to 125°C still providing more than +7 dBm output power over the same RF BW.
A 6-bit 1.3-GS/s Ping-Pong Domino-SAR ADC in 55-nm CMOS. This brief presents a 6-bit domino successive approximation register (SAR) analog-to-digital converter (ADC). The proposed domino-SAR ADC architecture provides high-speed domino operation and the offset immunity of SAR ADCs. The ping-pong operation enacts a master clock sampling scheme to achieve a sampling rate of 1.3 GHz. A prototype ADC chip was fabricated using a 55-nm CMOS technology. The chi...
A 2GS/s 8b time-interleaved SAR ADC for millimeter-wave pulsed radar baseband SoC. This paper presents a 2-GS/s 8-bit 16× time-interleaved (TI) analog-to-digital converter (ADC) for a millimeter-wave pulsed radar baseband system-on-chip (SoC). To suppress sampling timing errors among sub-ADCs, a foreground timing-skew calibration technique with small additional circuits is proposed. Measured spurious-free dynamic range and signal-to-noise distortion ratio at 1-GHz full Nyquist i...
Time-Interleaved SAR ADC with Background Timing-Skew Calibration for UWB Wireless Communication in IoT Systems. Ultra-wideband (UWB) wireless communication is prospering as a powerful partner of the Internet-of-things (IoT). Due to the ongoing development of UWB wireless communications, the demand for high-speed and medium resolution analog-to-digital converters (ADCs) continues to grow. The successive approximation register (SAR) ADCs are the most powerful candidate to meet these demands, attracting both industries and academia. In particular, recent time-interleaved SAR ADCs show that multi-giga sample per second (GS/s) can be achieved by overcoming the challenges of high-speed implementation of existing SAR ADCs. However, there are still critical issues that need to be addressed before the time-interleaved SAR ADCs can be applied in real commercial applications. The most well-known problem is that the time-interleaved SAR ADC architecture requires multiple sub-ADCs, and the mismatches between these sub-ADCs can significantly degrade overall ADC performance. And one of the most difficult mismatches to solve is the sampling timing skew. Recently, research to solve this timing-skew problem has been intensively studied. In this paper, we focus on the cutting-edge timing-skew calibration technique using a window detector. Based on the pros and cons analysis of the existing techniques, we come up with an idea that increases the benefits of the window detector-based timing-skew calibration techniques and minimizes the power and area overheads. Finally, through the continuous development of this idea, we propose a timing-skew calibration technique using a comparator offset-based window detector. To demonstrate the effectiveness of the proposed technique, intensive works were performed, including the design of a 7-bit, 2.5 GS/s 5-channel time-interleaved SAR ADC and various simulations, and the results prove excellent efficacy of signal-to-noise and distortion ratio (SNDR) and spurious-free dynamic range (SFDR) of 40.79 dB and 48.97 dB at Nyquist frequency, respectively, while the proposed window detector occupies only 6.5% of the total active area, and consumes 11% of the total power.
Accuracy-Enhanced Variance-Based Time-Skew Calibration Using SAR as Window Detector. This brief presents a time-interleaved (TI) successive-approximation-register (SAR) analog-to-digital converter (ADC) with an improved variance-based time-skew estimation technique, where we introduce a window detector (WD) based on a SAR ADC. It brings low hardware overhead and 10⁴ times faster convergence speed when compared to the prior variance-based time-skew calibration. Postlayout simulation results of a 10-bit, 2-GS/s TI-ADC in a 28-nm CMOS process verify the effectiveness of the proposed calibration. The results indicate that the signal-to-noise and distortion ratio/spurious-free dynamic range of the ADC improved from 41.9/48.6 to 53.2/63.3 dB after calibration. The total area and power are 0.105 mm² and 14.9 mW, respectively, where the WD occupies 0.0015 mm² and 0.55 mW.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
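A small Monte-Carlo sketch of the intuition behind DieHard's probabilistic safety is given below: with objects placed uniformly at random in an over-provisioned heap, a one-slot overflow often lands on free space rather than a live object. The slot model and numbers are assumptions, not DieHard's actual allocator or analysis.

```python
import random

def overflow_hits_live_object(heap_slots, live_objects, trials=50_000, seed=0):
    """Estimate the chance that a one-slot buffer overflow past object 0 lands
    on another live object, when objects occupy uniformly random slots."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        placement = rng.sample(range(heap_slots), live_objects)
        victim = placement[0] + 1              # overflow one slot past object 0
        if victim in set(placement[1:]):
            hits += 1
    return hits / trials

# With a heap twice the required size, the overflow corrupts a live object
# only about half the time; larger heaps drive this probability lower.
print(overflow_hits_live_object(heap_slots=64, live_objects=32))
```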
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked
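As a concrete instance of the "modern control theory" school of observers surveyed here, the Python sketch below simulates a basic Luenberger observer x_hat' = A x_hat + B u + L (y - C x_hat); the plant, the observer gain, and the Euler discretization are illustrative assumptions, not examples from the paper.

```python
import numpy as np

def simulate_observer(A, B, C, L, x0, u_seq, dt=0.01):
    """Run plant and Luenberger observer side by side and record the
    estimation error norm at each step (noise-free, Euler-discretized)."""
    x = np.array(x0, float)
    x_hat = np.zeros_like(x)
    err = []
    for u in u_seq:
        y = C @ x                                              # measured output
        x = x + dt * (A @ x + B * u)                           # true plant
        x_hat = x_hat + dt * (A @ x_hat + B * u + L @ (y - C @ x_hat))
        err.append(np.linalg.norm(x - x_hat))
    return err

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # lightly damped 2nd-order plant
B = np.array([0.0, 1.0])
C = np.array([[1.0, 0.0]])                 # only position is measured
L = np.array([[8.0], [15.0]])              # hand-picked stabilizing observer gain
err = simulate_observer(A, B, C, L, x0=[1.0, 0.0], u_seq=np.zeros(500))
print(f"estimation error: start {err[0]:.3f} -> end {err[-1]:.4f}")
```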
A MIMO decoder accelerator for next generation wireless communications In this paper, we present a multi-input-multi-output (MIMO) decoder accelerator architecture that offers versatility and reprogrammability while maintaining a very high performance-cost metric. The accelerator is meant to address the MIMO decoding bottlenecks associated with the convergence of multiple high-speed wireless standards onto a single device. It is scalable in the number of antennas, bandwidth, modulation format, and most importantly, present and emerging decoder algorithms. It features a Harvard-like architecture with complex vector operands and a deeply pipelined fixed-point complex arithmetic processing unit. When implemented on a Xilinx Virtex-4 LX200FF1513 field-programmable gate array (FPGA), the design occupied 43% of overall FPGA resources. The accelerator shows an advantage of up to three orders of magnitude (1000 times) in power-delay product for typical MIMO decoding operations relative to a general purpose DSP. When compared to dedicated application-specific IC (ASIC) implementations of mmse MIMO decoders, the accelerator showed a degradation of 340%-17%, depending on the actual ASIC being considered. In order to optimize the design for both speed and area, specific challenges had to be overcome. These include: definition of the processing units and their interconnection; proper dynamic scaling of the signal; and memory partitioning and parallelism.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is in the mT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs>fEXC. The differential architecture rejects magnetic stray fields and achieves 750x larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
score_0 through score_13: 1.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0
Load-Independent Current Control Technique of a Single-Inductor Multiple-Output Switching DC-DC converter. A load-independent current control technique is applied to a single-inductor multiple-output switching dc-dc boost converter implemented in a 0.5-μm standard CMOS technology. The total current is regulated by monitoring the instant of zero inductor current, and no additional power stage and reactive components are required. The output voltage settles within 25 μs with a voltage dip less than 50 mV...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling reduced interleaving. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A 108-nW 0.8-mm² Analog Voice Activity Detector Featuring a Time-Domain CNN With Sparsity-Aware Computation and Sparsified Quantization in 28-nm CMOS This article reports a passive analog feature extractor for realizing an area- and power-efficient voice activity detector (VAD) for voice-control edge devices. It features a switched-capacitor circuit as the time-domain convolutional neural network (TD-CNN) that extracts the 1-bit features for the subsequent binarized neural network (BNN) classifier. TD-CNN also allows area savings and low latency by evaluating the features temporally. The applied sparsity-aware computation (SAC) and sparsified quantization (SQ) aid in enlarging the output swing and reducing the model size without sacrificing the classification accuracy. With these techniques, the diversified output also aids in desensitizing the 1-bit quantizer from the offset and noise. The TD-CNN and BNN are trained as a single network to improve the VAD reconfigurability. Benchmarking with the prior art, our VAD in 28-nm CMOS scores a 90% (94%) speech (non-speech) hit rate on the TIMIT dataset with small power (108 nW) and area (0.8 mm²). We can configure the TD-CNN as a feature extractor for keyword spotting (KWS). It achieves a 93.5% KWS accuracy with the Google speech command dataset (two keywords). With two TD-CNNs operating simultaneously to extract more features, the KWS accuracy is 94.3%.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
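The dominance-frontier computation that underpins SSA construction can be sketched in a few lines. The sketch below follows the standard bottom-up formulation over the dominator tree (a local contribution from CFG successors plus an "up" contribution from dominator-tree children); the graph representation and variable names are my own, not taken from the paper.

def dominance_frontiers(succ, idom, nodes):
    """Compute DF[n] for every node.
    succ:  dict node -> list of CFG successors
    idom:  dict node -> immediate dominator (idom[entry] is None)
    nodes: iteration order; dominator-tree children must appear before their parents."""
    children = {n: [] for n in nodes}
    for n in nodes:
        if idom[n] is not None:
            children[idom[n]].append(n)
    df = {n: set() for n in nodes}
    for n in nodes:
        for s in succ[n]:                # DF_local: successors not strictly dominated by n
            if idom[s] != n:
                df[n].add(s)
        for c in children[n]:            # DF_up: propagate from dominator-tree children
            for w in df[c]:
                if idom[w] != n:
                    df[n].add(w)
    return df

# Tiny diamond CFG: entry -> a, b; a, b -> merge
succ = {'entry': ['a', 'b'], 'a': ['merge'], 'b': ['merge'], 'merge': []}
idom = {'entry': None, 'a': 'entry', 'b': 'entry', 'merge': 'entry'}
print(dominance_frontiers(succ, idom, ['a', 'b', 'merge', 'entry']))
# a and b both have {merge} in their frontier, i.e. the merge point needs phi-functions.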
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
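The single operation Chord exposes, mapping a key onto a node, can be sketched as a lookup over successor pointers and an optional finger table. The classes, identifier size, and three-node example below are illustrative only, not the paper's implementation.

M = 6                          # identifier bits (illustrative); IDs live on a ring of size 2**M

def in_interval(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b     # interval wraps around zero

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = self  # fixed up once the ring is assembled
        self.fingers = []      # finger[i] ~ successor(id + 2**i); optional shortcut pointers

    def find_successor(self, key):
        if in_interval(key, self.id, self.successor.id):
            return self.successor
        for f in reversed(self.fingers):          # closest preceding finger -> O(log n) hops
            if in_interval(f.id, self.id, key):
                return f.find_successor(key)
        return self.successor.find_successor(key) # fall back to walking the ring

# Three-node ring 1 -> 20 -> 45 -> 1 (fingers left empty, so lookups walk the ring)
a, b, c = Node(1), Node(20), Node(45)
a.successor, b.successor, c.successor = b, c, a
print(a.find_successor(30).id)   # 45: the node responsible for key 30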
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
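As one concrete instance of the method discussed above, the scaled-form ADMM iteration for the lasso (minimize 0.5·||Ax − b||² + λ·||x||₁) alternates a ridge-type x-update, a soft-thresholding z-update, and a dual update. The problem sizes, penalty, and iteration count below are arbitrary choices for a minimal sketch.

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))               # x-update (ridge)
        z = np.maximum(0.0, x + u - lam / rho) - np.maximum(0.0, -(x + u) - lam / rho)  # soft-threshold
        u = u + x - z                                                                    # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[[2, 7, 15]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.round(lasso_admm(A, b, lam=1.0), 2))   # sparse estimate close to x_true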
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load-transient response of the conventional Type III compensator, while the Type III response is synthesized by adding a high-gain low-frequency path (via an error amplifier) to a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant-Gm/C biasing technique can also be adopted in PT3 to reduce the process variation of the passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant-Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a 500-mA load current step. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
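A generic line-of-sight VLC link budget of the kind such models analyse can be sketched with a Lambertian source and OOK detection. The Lambertian assumption, the numeric parameters, and the simple Q-function BER below are illustrative stand-ins for the paper's market-weighted headlamp pattern, not its actual model.

import math

def los_gain(d, phi_deg, psi_deg, area=1e-4, m=1.0):
    """Lambertian LOS channel DC gain: H = (m+1)*A/(2*pi*d^2) * cos^m(phi) * cos(psi)."""
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    return (m + 1) * area / (2 * math.pi * d * d) * math.cos(phi) ** m * math.cos(psi)

def ook_ber(snr_linear):
    """BER of on-off keying with ideal detection: Q(sqrt(SNR))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear) / math.sqrt(2))

# Illustrative numbers only: 1 W optical power, responsivity 0.5 A/W, 1e-13 A^2 noise power.
for d in (5, 10, 20):
    h = los_gain(d, phi_deg=10, psi_deg=10)
    snr = (0.5 * 1.0 * h) ** 2 / 1e-13
    print(d, "m ->", f"BER ~ {ook_ber(snr):.2e}")   # BER degrades quickly with distance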
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows attenuating large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
A monolithic current-mode CMOS DC-DC converter with on-chip current-sensing technique A monolithic current-mode CMOS DC-DC converter with integrated power switches and a novel on-chip current sensor for feedback control is presented in this paper. With the proposed accurate on-chip current sensor, the sensed inductor current, combined with the internal ramp signal, can be used for current-mode DC-DC converter feedback control. In addition, no external components and no extra I/O pins are needed for the current-mode controller. The DC-DC converter has been fabricated with a standard 0.6-μm CMOS process. The measured absolute error between the sensed signal and the inductor current is less than 4%. Experimental results show that this converter with on-chip current sensor can operate from 300 kHz to 1 MHz with supply voltage from 3 to 5.2 V, which is suitable for single-cell lithium-ion battery supply applications. The output ripple voltage is about 20 mV with a 10-μF off-chip capacitor and 4.7-μH off-chip inductor. The power efficiency is over 80% for load current from 50 to 450 mA.
Integrated Class-D Amplifier With Active Current Sensing Suitable for Alternating Current Switches An integrated class-D amplifier with active current sensing suitable for AC switches is presented in this paper. The proposed current-sensing technique is used to sense and control the inductor current at the output stage and allows the current-sensing element to be integrated into class-D amplifiers. To verify the performance of the proposed circuits, a class-D amplifier was designed and fabricated with 3.3-V 0.35-μm double-poly quadruple-metal CMOS technology on a chip area of 1.79 × 1.45 mm. The maximum power efficiency of the class-D amplifier can be up to 94% for an 8-Ω load. These active current-sensing circuits can be applied to other power converters with AC switches.
Loop gain analysis and development of high-speed high-accuracy current sensors for switching converters Loop gain analysis for performance evaluation of current sensors for switching converters is presented. The MOS transistor scaling technique is reviewed and employed in developing high-speed and high-accuracy current sensors with offset-current cancellation. Using a standard 0.35-μm CMOS process, an integrated full-range inductor current sensor for a boost converter is designed. It operates at a supply voltage of 1.5 V with a DC loop gain of 38 dB and a unity-gain frequency of 10 MHz. The sensor works properly at a converter switching frequency of 500 kHz.
An Accurate, Continuous, and Lossless Self-Learning CMOS Current-Sensing Scheme for Inductor-Based DC-DC Converters Sensing current is a fundamental function in power supply circuits, especially as it generally applies to protection and feedback control. Emerging state-of-the-art switching supplies, in fact, are now exploring ways to use this sensed-current information to improve transient response, power efficiency, and compensation performance by appropriately self-adjusting, on the fly, frequency, inductor r...
A 5-MHz 91% peak-power-efficiency buck regulator with auto-selectable peak- and valley-current control This paper presents a multi-MHz buck regulator for portable applications using an auto-selectable peak- and valley-current control (ASPVCC) scheme. The proposed ASPVCC scheme and the dynamically-biased shunt feedback in the current sensors relax the settling-time requirement of the current sensing and improve the sensing speed. The proposed converter can thus operate at high switching frequencies with a wide range of duty ratios for reducing the required inductance. Implemented in a 0.35-μm CMOS process, the proposed buck converter can operate at 5-MHz with a duty-ratio range of 0.6, use a small-value off-chip inductor of 1 μH, and achieve 91% peak power efficiency.
Dithering Skip Modulator with a Novel Load Sensor for Ultra-wide-load High-Efficiency DC-DC Converters Dithering skip mode with a novel load sensor for DC-DC converters is proposed to maintain a high efficiency over a wide load range. Due to the efficiency drop in the transition from pulse-width modulation (PWM) to pulse-frequency modulation (PFM), a novel dithering skip modulation (DSM) is introduced to smooth the efficiency curve. Importantly, the DSM mode can dynamically skip a number of gate-driving pulses that is inversely proportional to the load current. Besides, the proposed novel load sensor can automatically select the optimum of the three modulation methods without an external selection pin. Simulation results show that DSM can maintain the efficiency of converters as high as about 89% over a wide load-current range from 3 mA to 500 mA.
A Simple Control Scheme to Avoid the Sensing Noise for the DC-DC Buck Converter With Synchronous Rectifier. The common method of improving efficiency when a synchronous buck converter operates at a light load is to adopt a current sensor to generate a turn-off command for the synchronous rectifying switch (SRS). However, a current-sensing technique with passive components is easily affected by noise and ringing, causing abnormal switching of the SRS. This study presents a simple control scheme for deter...
A Low Ripple Switched-Capacitor Voltage Regulator Using Flying Capacitance Dithering. In this work, a switched-capacitor voltage regulator (SCVR) that dithers flying capacitance to reduce output voltage ripple is presented, and the benefits of such ripple reduction are investigated. In the proposed technique, SC converters are designed to run at the maximum available frequency, and the flying capacitance for different phases is adjusted according to load current change through comp...
Design Strategy for Step-Up Charge Pumps With Variable Integer Conversion Ratios A method for identifying all possible configurations of two-phase charge pumps that give an integer conversion ratio with a fixed number of flying capacitors is presented. A systematic strategy is proposed to design, as an example, an integrated charge pump with a variable gain of 6× and 7× in a standard 0.35-μm CMOS process using only four flying capacitors. Conduction loss is considered and minimized. Measurement results verified the validity of the design methodology.
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 − ε, the protocol achieves broadcast within O((D + log(n/ε)) · log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires Ω(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.
An analytical study of a structured overlay in the presence of dynamic membership In this paper, we present an analytical study of dynamic membership (aka churn) in structured peer-to-peer networks. We use a fluid model approach to describe steady-state or transient phenomena and apply it to the Chord system. For any rate of churn and stabilization rates and any system size, we accurately account for the functional form of the probability of network disconnection as well as the fraction of failed or incorrect successor and finger pointers. We show how we can use these quantities to predict both the performance and consistency of lookups under churn. All theoretical predictions match simulation results. The analysis includes both features that are generic to structured overlays deploying a ring as well as Chord-specific details and opens the door to a systematic comparative analysis of, at least, ring-based structured overlay systems under churn.
Electromagnetic regenerative suspension system for ground vehicles This paper considers an electromagnetic regenerative suspension system (ERSS) that recovers the kinetic energy originated from vehicle vibration, which is previously dissipated in traditional shock absorbers. It can also be used as a controllable damper that can improve the vehicle's ride and handling performance. The proposed electromagnetic regenerative shock absorbers (ERSAs) utilize a linear or a rotational electromagnetic generator to convert the kinetic energy from suspension vibration into electricity, which can be used to reduce the load on the alternator so as to improve fuel efficiency. A complete ERSS is discussed here that includes the regenerative shock absorber, the power electronics for power regulation and suspension control, and an electronic control unit (ECU). Different shock absorber designs are proposed and compared for simplicity, efficiency, energy density, and controlled suspension performances. Both simulation and experiment results are presented and discussed.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
Scores: 1.002126, 0.003226, 0.0027, 0.002619, 0.002325, 0.001919, 0.001613, 0.000084, 0.000001, 0, 0, 0, 0, 0
General Top/Bottom-Plate Charge Recycling Technique for Integrated Switched Capacitor DC-DC Converters. Energy loss due to top/bottom plate parasitic capacitances is one of the factors determining the efficiency of integrated switched capacitor DC/DC converters. This loss is particularly significant when MOS gate or deep trench capacitors are used. We propose a technique for top/bottom-plate charge recycling that can be applied with low overhead independently of the converter architecture. Two examp...
A 20-pW Discontinuous Switched-Capacitor Energy Harvester for Smart Sensor Applications. We present a discontinuous harvesting approach for switch capacitor dc-dc converters that enables ultralow-power energy harvesting. Smart sensor applications rely on ultralow-power energy harvesters to scavenge energy across a wide range of ambient power levels and charge the battery. Based on the key observation that energy source efficiency is higher than charge pump efficiency, we present a dis...
An Arithmetic Progression Switched-Capacitor DC-DC Converter with Soft VCR Transitions Achieving 93.7% Peak Efficiency and 400 mA Output Current Dynamic source adaptation and supply modulation can benefit the power efficiency and system functionality of energy-harvesting interfaces, voltage-scalable SoCs, device drivers, power amplifiers, and others. A switched-capacitor (SC) DC-DC converter can achieve high power conversion efficiency (PCE) and power density at hundreds-of-milliwatts power levels. Several reconfigurable SC topologies emerged to generate m...
Conductance Modulation Techniques in Switched-Capacitor DC-DC Converter for Maximum-Efficiency Tracking and Ripple Mitigation in 22 nm Tri-Gate CMOS Switch conductance modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, in 22 nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures and, (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency improvements up to 15% are measured under low output voltage and load conditions. Load-independent output ripple of <50 mV is achieved, enabling reduced interleaving. Test chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Switched Capacitor Energy Harvester Based on a Single-Cycle Criterion for MPPT to Eliminate Storage Capacitor. A single-cycle criterion maximum power point tracking (MPPT) technique is proposed to eliminate the need for bulky on-chip capacitors in the energy harvesting system for Internet of Everything (IoE). The conventional time-domain MPPT features ultra-low power consumption; however, it also requires a nanofarad-level capacitor for fine time resolution. The proposed maximum power monitoring does not r...
Automotive Switched-Capacitor DC–DC Converter With High BW Power Mirror and Dual Supply Driver This paper presents circuit topologies and implementations for SC DC-DC converters with controlled charging current. It focuses on the power current mirror that regulates the charging current of the flying capacitor and on the switch drivers. The bandwidth and transient response of the power mirror are improved by inserting an auxiliary current mirror in its input signal path. Conventional switch ...
A Switched Capacitor Multiple Input Single Output Energy Harvester (Solar + Piezo) Achieving 74.6% Efficiency With Simultaneous MPPT This paper presents an inductor-less switched capacitor based energy harvester, which can simultaneously harvest from 2 energy sources (Solar + Piezo). The proposed harvester employs maximum power point tracking algorithm, by changing the conversion ratios of the charge pumps for piezo and solar sources, and output voltage control by varying the switching frequency. The proposed MPPT algorithm can match the input impedance of two sources simultaneously. Implemented in 65nm CMOS, the proposed harvester can generate a fixed output between 1.8 and 2.5 V output while delivering 35 μW to 70 μW power with a peak power conversion efficiency of 74.6%.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
Cellular Logic-in-Memory Arrays As a direct consequence of large-scale integration, many advantages in the design, fabrication, testing, and use of digital circuitry can be achieved if the circuits can be arranged in a two-dimensional iterative, or cellular, array of identical elementary networks, or cells. When a small amount of storage is included in each cell, the same array may be regarded either as a logically enhanced memory array, or as a logic array whose elementary gates and connections can be "programmed" to realize a desired logical behavior.
On implementing omega with weak reliability and synchrony assumptions We study the feasibility and cost of implementing Ω---a fundamental failure detector at the core of many algorithms---in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where processes are synchronous, but: (a) any number of them may crash, and (b) only the output links of an unknown correct process are eventually timely (all other links can be asynchronous and/or lossy). This is in contrast to previous implementations of Ω which assume that a quadratic number of links are eventually timely, or systems that are strong enough to implement the eventually perfect failure detector P. We next show that implementing Ω in S is expensive: even if we want an implementation that tolerates just one process crash, all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. We then show that with a small additional assumption---the existence of some unknown correct process whose asynchronous links are lossy but fair---we can implement Ω efficiently: we give an algorithm for Ω such that eventually only one process (the elected leader) sends messages.
Bandwidth-efficient management of DHT routing tables Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20-23, 25, 26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table's size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
A 52 μW Wake-Up Receiver With −72 dBm Sensitivity Using an Uncertain-IF Architecture A dedicated wake-up receiver may be used in wireless sensor nodes to control duty cycle and reduce network latency. However, its power dissipation must be extremely low to minimize the power consumption of the overall link. This paper describes the design of a 2 GHz receiver using a novel "uncertain-IF" architecture, which combines MEMS-based high-Q filtering and a free-running CMOS ring o...
Exploration of Constantly Connected Dynamic Graphs Based on Cactuses. We study the problem of exploration by a mobile entity (agent) of a class of dynamic networks, namely constantly connected dynamic graphs. This problem has already been studied in the case where the agent knows the dynamics of the graph and the underlying graph is a ring of n vertices [5]. In this paper, we consider the same problem and we suppose that the underlying graph is a cactus graph (a connected graph in which any two simple cycles have at most one vertex in common). We propose an algorithm that allows the agent to explore these dynamic graphs in at most 2^(O(√log n)) · n time units. We show that the lower bound of the algorithm is 2^(Ω(√log n)) · n time units.
An Event-Driven Quasi-Level-Crossing Delta Modulator Based on Residue Quantization This article introduces a digitally intensive event-driven quasi-level-crossing (quasi-LC) delta-modulator analog-to-digital converter (ADC) with adaptive resolution (AR) for Internet of Things (IoT) wireless networks, in which minimizing the average sampling rate for sparse input signals can significantly reduce the power consumed in data transmission, processing, and storage. The proposed AR quasi-LC delta modulator quantizes the residue voltage signal with a 4-bit asynchronous successive-approximation-register (SAR) sub-ADC, which enables a straightforward implementation of LC and AR algorithms in the digital domain. The proposed modulator achieves data compression by means of a globally signal-dependent average sampling rate and achieves AR through a digital multi-level comparison window that overcomes the tradeoff between the dynamic range and the input bandwidth in the conventional LC ADCs. Engaging the AR algorithm reduces the average sampling rate by a factor of 3 at the edge of the modulator’s signal bandwidth. The proposed modulator is fabricated in 28-nm CMOS and achieves a peak SNDR of 53 dB over a signal bandwidth of 1.42 MHz while consuming 205 μW and occupying an active area of 0.0126 mm².
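The core event-driven behaviour can be sketched in software as follows: emit a sample only when the input moves by more than a threshold from the last tracked level, and quantize the residue with a coarse sub-quantizer. The threshold, residue resolution, and test signal below are placeholders, not the circuit's actual parameters.

import math

def quasi_lc_sample(signal, delta=0.05, residue_bits=4):
    """Emit (index, level, residue_code) events only when the input drifts more than
    `delta` from the tracked level; the residue is quantized by a coarse sub-quantizer
    (a software stand-in for the 4-bit asynchronous SAR sub-ADC)."""
    events = []
    level = signal[0]
    lsb = 2 * delta / (2 ** residue_bits)          # residue quantizer step
    for i, x in enumerate(signal):
        if abs(x - level) >= delta:                # level-crossing event
            code = int(round((x - level) / lsb))   # quantize the residue
            level += code * lsb                    # new tracked level
            events.append((i, level, code))
    return events

sig = [0.5 * math.sin(2 * math.pi * 3 * t / 1000) for t in range(1000)]
ev = quasi_lc_sample(sig)
print(len(ev), "events for", len(sig), "input samples")   # far fewer events than uniform samples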
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 0, 0, 0, 0, 0, 0, 0
Bit fusion: bit-level dynamically composable architecture for accelerating deep neural networks Hardware acceleration of Deep Neural Networks (DNNs) aims to tame their enormous compute intensity. Fully realizing the potential of acceleration in this domain requires understanding and leveraging algorithmic properties of DNNs. This paper builds upon the algorithmic insight that bitwidth of operations in DNNs can be reduced without compromising their classification accuracy. However, to prevent loss of accuracy, the bitwidth varies significantly across DNNs and it may even be adjusted for each layer individually. Thus, a fixed-bitwidth accelerator would either offer limited benefits to accommodate the worst-case bitwidth requirements, or inevitably lead to a degradation in final accuracy. To alleviate these deficiencies, this work introduces dynamic bit-level fusion/decomposition as a new dimension in the design of DNN accelerators. We explore this dimension by designing Bit Fusion, a bit-flexible accelerator, that constitutes an array of bit-level processing elements that dynamically fuse to match the bitwidth of individual DNN layers. This flexibility in the architecture enables minimizing the computation and the communication at the finest granularity possible with no loss in accuracy. We evaluate the benefits of Bit Fusion using eight real-world feed-forward and recurrent DNNs. The proposed microarchitecture is implemented in Verilog and synthesized in 45 nm technology. Using the synthesis results and cycle accurate simulation, we compare the benefits of Bit Fusion to two state-of-the-art DNN accelerators, Eyeriss [1] and Stripes [2]. In the same area, frequency, and process technology, Bit Fusion offers 3.9X speedup and 5.1X energy savings over Eyeriss. Compared to Stripes, Bit Fusion provides 2.6X speedup and 3.9X energy reduction at 45 nm node when Bit Fusion area and frequency are set to those of Stripes. Scaling to GPU technology node of 16 nm, Bit Fusion almost matches the performance of a 250-Watt Titan Xp, which uses 8-bit vector instructions, while Bit Fusion merely consumes 895 milliwatts of power.
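The bit-level composition idea can be illustrated in software: a wide multiplication is decomposed into 2-bit × 2-bit partial products that are shifted and summed, so the same 2-bit elements could be regrouped to serve 2-, 4-, or 8-bit operands. The splitting granularity below mirrors the paper's bit-level processing elements only in spirit; it is a behavioural sketch, not the accelerator's datapath.

def split_2bit(value, width):
    """Split an unsigned `width`-bit value into 2-bit slices, least-significant first."""
    return [(value >> (2 * i)) & 0b11 for i in range(width // 2)]

def fused_multiply(a, b, width=8):
    """Multiply two unsigned `width`-bit numbers out of 2-bit x 2-bit partial products,
    the way an array of fused bit-level PEs would combine them."""
    acc = 0
    for i, a_slice in enumerate(split_2bit(a, width)):
        for j, b_slice in enumerate(split_2bit(b, width)):
            acc += (a_slice * b_slice) << (2 * (i + j))   # shift-and-add of partial products
    return acc

assert fused_multiply(173, 99) == 173 * 99
assert fused_multiply(13, 9, width=4) == 13 * 9           # same slices reused at 4-bit precision
print("bit-level decomposition reproduces the full-precision products")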
Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors. Because the memory layouts and dataflows in these architectures are optimized for the access patterns during inference, however, they do not efficiently support the emerging sparse training techniques. In this paper, we demonstrate (a) that accelerating sparse training requires a co-design approach where algorithms are adapted to suit the constraints of hardware, and (b) that hardware for sparse DNN training must tackle constraints that do not arise in inference accelerators. As proof of concept, we adapt a sparse training algorithm to be amenable to hardware acceleration; we then develop dataflow, data layout, and load-balancing techniques to accelerate it. The resulting system is a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models without first training, then pruning, and finally retraining, a dense model. Compared to training the equivalent unpruned models using a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26× less energy and offers up to 4× speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
DRQ: Dynamic Region-based Quantization for Deep Neural Network Acceleration Quantization is an effective technique for Deep Neural Network (DNN) inference acceleration. However, conventional quantization techniques are either applied at network or layer level that may fail to exploit fine-grained quantization for further speedup, or only applied on kernel weights without paying attention to the feature map dynamics that may lead to lower NN accuracy. In this paper, we propose a dynamic region-based quantization, namely DRQ, which can change the precision of a DNN model dynamically based on the sensitive regions in the feature map to achieve greater acceleration while reserving better NN accuracy. We propose an algorithm to identify the sensitive regions and an architecture that utilizes a variable-speed mixed-precision convolution array to enable the algorithm with better performance and energy efficiency. Our experiments on a wide variety of networks show that compared to a coarse-grained quantization accelerator like “Eyeriss”, DRQ can achieve 92% performance gain and 72% energy reduction with less than 1% accuracy loss. Compared to the state-of-the-art mixed-precision quantization accelerator “OLAccel”, DRQ can also achieve 21% performance gain and 33% energy reduction with 3% prediction accuracy improvement, which is quite impressive for inference.
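A software caricature of the region-based idea splits the feature map into tiles, scores each tile's sensitivity (here simply by mean magnitude), and quantizes sensitive tiles at higher precision. The tile size, the sensitivity proxy, and the bit-widths below are assumptions for illustration, not DRQ's actual algorithm.

import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization to `bits` bits over the tile's own range."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1) or 1.0
    return np.round(x / scale) * scale

def region_quantize(fmap, tile=4, threshold=0.5, hi_bits=8, lo_bits=2):
    """Quantize sensitive tiles (mean |activation| above threshold) at hi_bits, others at lo_bits."""
    out = np.empty_like(fmap)
    h, w = fmap.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = fmap[r:r + tile, c:c + tile]
            bits = hi_bits if np.mean(np.abs(block)) > threshold else lo_bits
            out[r:r + tile, c:c + tile] = quantize(block, bits)
    return out

rng = np.random.default_rng(1)
fmap = rng.standard_normal((16, 16)) * (rng.random((16, 16)) > 0.7)   # sparse-ish activations
err = np.mean(np.abs(region_quantize(fmap) - fmap))
print(f"mean absolute quantization error: {err:.4f}")   # most tiles use the cheap low precision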
Tetris: re-architecting convolutional neural network computation for machine learning accelerators Inference efficiency is the predominant consideration in designing deep learning accelerators. Previous work mainly focuses on skipping zero values to deal with remarkable ineffectual computation, while zero bits in non-zero values, as another major source of ineffectual computation, is often ignored. The reason lies on the difficulty of extracting essential bits during operating multiply-and-accumulate (MAC) in the processing element. Based on the fact that zero bits occupy as high as 68.9% fraction in the overall weights of modern deep convolutional neural network models, this paper firstly proposes a weight kneading technique that could eliminate ineffectual computation caused by either zero value weights or zero bits in non-zero weights, simultaneously. Besides, a split-and-accumulate (SAC) computing pattern in replacement of conventional MAC, as well as the corresponding hardware accelerator design called Tetris are proposed to support weight kneading at the hardware level. Experimental results prove that Tetris could speed up inference up to 1.50x, and improve power efficiency up to 5.33x compared with the state-of-the-art baselines.
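The essential-bit idea behind split-and-accumulate can be demonstrated in a few lines: instead of a full multiply, the activation is shifted and added once per set bit of the weight, so zero bits (and zero-valued weights) contribute no work at all. This is a behavioural sketch of the computing pattern, not the Tetris hardware or its weight-kneading scheme.

def set_bit_positions(w):
    """Positions of the essential (non-zero) bits of an unsigned weight."""
    return [i for i in range(w.bit_length()) if (w >> i) & 1]

def sac_dot(weights, activations):
    """Split-and-accumulate dot product: one shift-add per essential weight bit."""
    acc = 0
    ops = 0
    for w, a in zip(weights, activations):
        for pos in set_bit_positions(w):   # zero weights and zero bits are skipped entirely
            acc += a << pos
            ops += 1
    return acc, ops

weights = [0, 5, 8, 3]          # binary 000, 101, 1000, 011 -> 5 essential bits in total
acts = [7, 2, 4, 1]
value, ops = sac_dot(weights, acts)
assert value == sum(w * a for w, a in zip(weights, acts))
print(value, "computed with", ops, "shift-adds instead of", len(weights), "full multiplies")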
2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency The recent breakthroughs of deep neural networks (DNNs) and the advent of billions of Internet of Things (IoT) devices have excited an explosive demand for intelligent IoT devices equipped with domain-specific DNN accelerators. However, the deployment of DNN accelerator enabled intelligent functionality into real-world IoT devices still remains particularly challenging. First, powerful DNNs often come at prohibitive complexities, whereas IoT devices often suffer from stringent resource constraints. Second, while DNNs are vulnerable to adversarial attacks especially on IoT devices exposed to complex real-world environments, many IoT applications require strict security. Existing DNN accelerators mostly tackle only one of the two aforementioned challenges (i.e., efficiency or adversarial robustness) while neglecting or even sacrificing the other. To this end, we propose a 2-in-1 Accelerator, an integrated algorithm-accelerator co-design framework aiming at winning both the adversarial robustness and efficiency of DNN accelerators. Specifically, we first propose a Random Precision Switch (RPS) algorithm that can effectively defend DNNs against adversarial attacks by enabling random DNN quantization as an in-situ model switch during training and inference. Furthermore, we propose a new precision-scalable accelerator featuring (1) a new precision-scalable MAC unit architecture which spatially tiles the temporal MAC units to boost both the achievable efficiency and flexibility and (2) a systematically optimized dataflow that is searched by our generic accelerator optimizer. Extensive experiments and ablation studies validate that our 2-in-1 Accelerator can not only aggressively boost both the adversarial robustness and efficiency of DNN accelerators under various attacks, but also naturally support instantaneous robustness-efficiency trade-offs adapting to varied resources without the necessity of DNN retraining. We believe our 2-in-1 Accelerator has opened up an exciting perspective for robust and efficient accelerator design.
Scale-out acceleration for machine learning. The growing scale and complexity of Machine Learning (ML) algorithms has resulted in prevalent use of distributed general-purpose systems. In a rather disjoint effort, the community is focusing mostly on high performance single-node accelerators for learning. This work bridges these two paradigms and offers CoSMIC, a full computing stack constituting language, compiler, system software, template architecture, and circuit generators, that enable programmable acceleration of learning at scale. CoSMIC enables programmers to exploit scale-out acceleration using FPGAs and Programmable ASICs (P-ASICs) from a high-level and mathematical Domain-Specific Language (DSL). Nonetheless, CoSMIC does not require programmers to delve into the onerous task of system software development or hardware design. CoSMIC achieves three conflicting objectives of efficiency, automation, and programmability, by integrating a novel multi-threaded template accelerator architecture and a cohesive stack that generates the hardware and software code from its high-level DSL. CoSMIC can accelerate a wide range of learning algorithms that are most commonly trained using parallel variants of gradient descent. The key is to distribute partial gradient calculations of the learning algorithms across the accelerator-augmented nodes of the scale-out system. Additionally, CoSMIC leverages the parallelizability of the algorithms to offer multi-threaded acceleration within each node. Multi-threading allows CoSMIC to efficiently exploit the numerous resources that are becoming available on modern FPGAs/P-ASICs by striking a balance between multi-threaded parallelism and single-threaded performance. CoSMIC takes advantage of algorithmic properties of ML to offer a specialized system software that optimizes task allocation, role-assignment, thread management, and internode communication. We evaluate the versatility and efficiency of CoSMIC for 10 different machine learning applications from various domains. On average, a 16-node CoSMIC with UltraScale+ FPGAs offers 18.8× speedup over a 16-node Spark system with Xeon processors while the programmer only writes 22--55 lines of code. CoSMIC offers higher scalability compared to the state-of-the-art Spark; scaling from 4 to 16 nodes with CoSMIC yields 2.7× improvements whereas Spark offers 1.8×. These results confirm that the full-stack approach of CoSMIC takes an effective and vital step towards enabling scale-out acceleration for machine learning.
Shortcut Mining: Exploiting Cross-Layer Shortcut Reuse in DCNN Accelerators Off-chip memory traffic has been a major performance bottleneck in deep learning accelerators. While reusing on-chip data is a promising way to reduce off-chip traffic, the opportunity of reusing shortcut-connection data in deep networks (e.g., residual networks) has been largely neglected. Such shortcut data accounts for nearly 40% of the total feature-map data. In this paper, we propose Shortcut Mining, a novel approach that "mines" the unexploited opportunity of on-chip data reusing. We introduce the abstraction of logical buffers to address the lack of flexibility in existing buffer architectures, and then propose a sequence of procedures which, collectively, can effectively reuse both shortcut and non-shortcut feature maps. The proposed procedures are also able to reuse shortcut data across any number of intermediate layers without using additional buffer resources. Experiment results from prototyping on FPGAs show that the proposed Shortcut Mining achieves 53.3%, 58%, and 43% reduction in off-chip feature-map traffic for SqueezeNet, ResNet-34, and ResNet-152, respectively, and a 1.93X increase in throughput compared with a state-of-the-art accelerator.
DRISA: a DRAM-based reconfigurable in-situ accelerator. Data movement between the processing units and the memory in traditional von Neumann architecture is creating the "memory wall" problem. To bridge the gap, two approaches, the memory-rich processor (more on-chip memory) and the compute-capable memory (processing-in-memory) have been studied. However, the first one has strong computing capability but limited memory capacity/bandwidth, whereas the second one is the exact the opposite. To address the challenge, we propose DRISA, a DRAM-based Reconfigurable In-Situ Accelerator architecture, to provide both powerful computing capability and large memory capacity/bandwidth. DRISA is primarily composed of DRAM memory arrays, in which every memory bitline can perform bitwise Boolean logic operations (such as NOR). DRISA can be reconfigured to compute various functions with the combination of the functionally complete Boolean logic operations and the proposed hierarchical internal data movement designs.We further optimize DRISA to achieve high performance by simultaneously activating multiple rows and subarrays to provide massive parallelism, unblocking the internal data movement bottlenecks, and optimizing activation latency and energy. We explore four design options and present a comprehensive case study to demonstrate significant acceleration of convolutional neural networks. The experimental results show that DRISA can achieve 8.8x speedup and 1.2x better energy efficiency compared with ASICs, and 7.7x speedup and 15x better energy efficiency over GPUs with integer operations.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks Peer-to-peer (P2P) reputation systems are needed to evaluate the trustworthiness of participating peers and to combat selfish and malicious peer behaviors. The reputation system collects locally generated peer feedbacks and aggregates them to yield global reputation scores. Development of decentralized reputation system is in great demand for unstructured P2P networks since most P2P applications on the Internet are unstructured. In the absence of fast hashing and searching mechanisms, how to perform efficient reputation aggregation is a major challenge on unstructured P2P computing. We propose a novel reputation aggregation scheme called GossipTrust. This system computes global reputation scores of all nodes concurrently. By resorting to a gossip protocol and leveraging the power nodes, GossipTrust is adapted to peer dynamics and robust to disturbance by malicious peers. Simulation experiments demonstrate the system as scalable, accurate, robust and fault-tolerant. These results prove the claimed advantages in low aggregation overhead, storage efficiency, and scoring accuracy in unstructured P2P networks. With minor modifications, the system is also applicable to structured P2P systems with projected better performance.
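The abstract above describes gossip-based aggregation only at a high level. As a rough illustration of the idea, the sketch below runs a generic pairwise-averaging gossip over locally held scores so that every peer converges toward the global mean. This is a minimal sketch assuming random peer pairing; it is not the GossipTrust protocol itself (which also leverages power nodes and handles peer dynamics), and the node names and scores are made up.

```python
import random

def gossip_aggregate(local_scores, rounds=50, seed=0):
    """Pairwise-averaging gossip: each round a random pair of nodes averages
    their current estimates, so all estimates converge to the global mean.
    Generic gossip-averaging sketch, not the GossipTrust protocol."""
    rng = random.Random(seed)
    est = dict(local_scores)          # node -> current estimate
    nodes = list(est)
    for _ in range(rounds):
        a, b = rng.sample(nodes, 2)   # random gossip pair
        avg = (est[a] + est[b]) / 2.0
        est[a] = est[b] = avg
    return est

# Example: four peers holding locally generated feedback scores for one target peer.
print(gossip_aggregate({"p1": 0.9, "p2": 0.4, "p3": 0.7, "p4": 0.2}))
```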
A continuous-time ripple reduction technique for spinning-current Hall sensors The intrinsic offset of Hall sensors can be reduced with the help of the spinning-current technique, which modulates this offset away from the signal band. The resulting offset ripple can then be removed by a low-pass filter, which, however, limits the sensor's bandwidth. This paper presents a ripple-reduction technique that does not require a low-pass filter. Measurements on a Hall sensor system implemented in a 0.18μm CMOS process show that the technique can reduce the residual ripple by at least 40dB - to the same level as the sensor's noise.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
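For readers unfamiliar with how a SAR ADC arrives at its output code, the sketch below walks the textbook MSB-first binary search. The event-triggered redundant correction cycle described in the brief is not modeled, and the reference voltage and input value are illustrative assumptions.

```python
def sar_convert(vin, vref=0.65, bits=10):
    """Textbook SAR binary search: try each bit MSB-first and keep it if the
    trial DAC voltage does not exceed the input. (The event-triggered error
    correction cycle described in the brief is not modeled here.)"""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        if trial / float(1 << bits) * vref <= vin:   # comparator decision
            code = trial
    return code

print(sar_convert(0.3))   # -> code near 0.3 / 0.65 * 1023
```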
1.023676
0.023111
0.022667
0.022222
0.022222
0.012778
0.008148
0.002978
0.00035
0
0
0
0
0
Software-defined radio receiver: dream to reality This article describes a fully integrated 90 nm CMOS software-defined radio receiver operating in the 800 MHz to 5 GHz band. Unlike the classical SDR paradigm, which digitizes the whole spectrum uniformly, this receiver acts as a signal conditioner for the analog-to-digital converters, emphasizing only the wanted channel. Thus, the ADCs operate with modest resolution and sample rate, consuming low power. This approach makes portable SDR a reality
New Architecture for a Wireless Smart Sensor Based on a Software-Defined Radio. Today, wireless sensor technology is based on monolithic transceivers that optimize cost but have a rigid hardware architecture. In this paper, a new architecture for wireless sensors is presented. It is based on a software-defined radio concept and shows impressive adaptability to external conditions. The proposed architecture, which is called the wireless ultrasmart sensor (WUSS), enables the us...
Design considerations and implementation of a DSP-based car-radio IF Processor This paper describes design considerations and the implementation of a software defined radio receiver encompassing intermediate frequency (IF) digitization. The proposed system-on-chip performs the demodulation of both AM and FM stereo signals, digitized at the IF by means of a high dynamic range sigma-delta bandpass A/D converter. The chosen architecture combines hardware and software functions, trading flexible programmability with area occupancy. The software also includes true blind equalization of the FM signals, resulting in the rejection of the neighbor channels and of any other interfering signal, even under severe multipath conditions. The described chip, realized in a 0.18-μm CMOS technology, occupies an area of 15.2 mm² and is enclosed in a 64-pin package.
A Reconfigurable Narrow-Band MB-OFDM UWB Receiver Architecture This paper presents an analysis on the receiver front-end architectures for multiband orthogonal frequency-division multiplexing ultra-wide-band (UWB) terminals. An interference analysis is carried out in order to derive the main linearity specifications of the receiver front-end. A reconfigurable narrow-band architecture is introduced that can best cope with the main challenges of the UWB receivers: broadband impedance matching and high out-of-band linearity. Simulation results show that linearity requirements can be achieved with sizeable margin.
A Tier 3 Software Defined AM Radio Software Defined Radio (SDR) is one of the key technologies in the wireless communication industry. A Tier-3 SDR, or Ideal Software Radio receiver, is a device that performs all the demodulation (RF, IF and base-band) entirely in the digital domain. In this work we present the implementation and performance measurement of an AM receiver that belongs to this class of ideal software radios. The receiver is implemented in a Field Programmable Gate Array device.
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use LNA based on a noise-cancelled common gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, an NF of 3 dB, and s11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW.
Design and Analysis of a Performance-Optimized CMOS UWB Distributed LNA In this paper, the systematic design and analysis of a CMOS performance-optimized distributed low-noise amplifier (DLNA) comprising bandwidth-enhanced cascode cells will be presented. Each cascode cell employs an inductor between the common-source and common-gate devices to enhance the bandwidth, while reducing the high-frequency input-referred noise. The noise analysis and optimization of the DLN...
Low-Power Programmable Gain CMOS Distributed LNA A design methodology for low power MOS distributed amplifiers (DAs) is presented. The bias point of the MOS devices is optimized so that the DA can be used as a low-noise amplifier (LNA) in broadband applications. A prototype 9-mW LNA with programmable gain was implemented in a 0.18-μm CMOS process. The LNA provides a flat gain, S21, of 8 ± 0.6 dB from DC to 6.2 GHz, with an...
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
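As a concrete reading of the "MRU change" idea, the sketch below replays a toy access trace through per-set LRU stacks and counts how often the CPU re-touches the MRU line versus a non-MRU line. The trace, associativity, and counting granularity are hypothetical; the paper's own grouping of replacement-stack regions and its preserved CPU traces are not reproduced.

```python
def mru_change_profile(trace, assoc=4):
    """Track an LRU replacement stack per set and count how often the CPU
    re-touches the most-recently-used (MRU) line versus a non-MRU line.
    A non-MRU access approximates the paper's 'MRU change' event."""
    stacks = {}                      # set index -> list of tags, MRU first
    mru_hits = mru_changes = 0
    for set_idx, tag in trace:
        stack = stacks.setdefault(set_idx, [])
        if stack and stack[0] == tag:
            mru_hits += 1
        else:
            mru_changes += 1         # access to a non-MRU (or absent) line
            if tag in stack:
                stack.remove(tag)
            stack.insert(0, tag)     # promote to MRU
            del stack[assoc:]        # enforce associativity
    return mru_hits, mru_changes

# Toy trace of (set index, tag) pairs, not from the paper's CPU traces.
print(mru_change_profile([(0, 'A'), (0, 'A'), (0, 'B'), (0, 'A'), (1, 'C'), (1, 'C')]))
```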
Principles of Distributed Systems, 13th International Conference, OPODIS 2009, Nîmes, France, December 15-18, 2009. Proceedings
A new concept for wireless reconfigurable receivers In this article we present the Self-Adaptive Universal Receiver (SAUR), a novel wireless reconfigurable receiver architecture. This scheme is based on blind recognition of the system in use, operating on a new radio interface comprising two functional phases. The first phase performs a wideband analysis (WBA) on the received signal to determine its standard. The second phase corresponds to demodulation. Here we only focus on the WBA phase, which consists of an iterative process to find the bandwidth compatible with the associated signal processing techniques. The blind standard recognition performed in the last iteration step of this process uses radial basis function neural networks. This allows a strong analogy between our approach and conventional pattern recognition problems. The efficiency of this type of blind recognition is illustrated with the results of extensive simulations performed in our laboratory using true data of received signals.
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals can be continuously monitored.
1.018146
0.014667
0.013333
0.013333
0.006667
0.003665
0.000883
0.000038
0
0
0
0
0
0
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so-called converter-reshuffling to insert random spikes in the power consumption profile. An entropy based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
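The abstract mentions an entropy-based security metric without detailing it. As a generic stand-in, the sketch below quantizes a power trace into bins and computes the Shannon entropy of the resulting histogram (a more scrambled profile yields higher entropy). The binning, trace values, and bin count are assumptions, not the metric defined in the paper.

```python
import math
from collections import Counter

def trace_entropy(power_trace, n_bins=16):
    """Shannon entropy of a quantized power trace: higher entropy means the
    consumption profile is harder to correlate with the underlying workload.
    Generic illustration only; the paper defines its own entropy metric."""
    lo, hi = min(power_trace), max(power_trace)
    width = (hi - lo) / n_bins or 1.0
    bins = Counter(min(int((p - lo) / width), n_bins - 1) for p in power_trace)
    total = len(power_trace)
    return -sum((c / total) * math.log2(c / total) for c in bins.values())

print(trace_entropy([1.0, 1.0, 1.1, 1.4, 0.9, 1.3, 1.2, 1.0]))
```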
12.4 A 10mW fully integrated 2-to-13V-input buck-boost SC converter with 81.5% peak efficiency In recent years, significant progress has been made on switched-capacitor DC-DC converters as they enable fully integrated on-chip power management. New converter topologies overcame the fixed input-to-output voltage limitation and achieved high efficiency at high power densities [1-6]. SC converters are attractive to not only mobile handheld devices with small input and output voltages, but also for power conversion in IoE, industrial and automotive applications, etc. Such applications need to be capable of handling widely varying input voltages of more than 10V, which requires a large amount of conversion ratios [1-3]. The goal is to achieve a fine granularity with the least number of flying capacitors. In [1] an SC converter was introduced that achieves these goals at low input voltage VIN ≤ 2.5V. [2] shows good efficiency up to VIN = 8V while its conversion ratio is restricted to ≤1/2 with a limited, non-equidistant number of conversion steps. A particular challenge arises with increasing input voltage as several loss mechanisms like parasitic bottom-plate losses and gate-charge losses of high-voltage transistors become of significant influence. High input voltages require supporting circuits like level shifters, auxiliary supply rails etc., which allocate additional area and add losses [2-5]. The combination of both increasing voltage and conversion ratios (VCR) lowers the efficiency and the achievable output power of SC converters. [3] and [5] use external capacitors to enable higher output power, especially for higher VIN. However, this is contradictory to the goal of a fully integrated power supply.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level Vout,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure Vout > Vout,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Highly Efficient Reconfigurable Charge Pump Energy Harvester With Wide Harvesting Range and Two-Dimensional MPPT for Internet of Things. A monolithic microwatt-level charge pump energy harvester is proposed for smart nodes of Internet of Things (IOT) networks. Due to the variation of the available voltage and power in IOT scenarios, the charge pump was optimized by the proposed architecture and circuit level innovations. First, a reconfigurable charge pump is introduced to provide the hybrid conversion ratios (CRs) as 1⅓× up to...
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Software complexity measurement Inappropriate use of software complexity measures can have large, damaging effects by rewarding poor programming practices and demoralizing good programmers. Software complexity measures must be critically evaluated to determine the ways in which they can best be used.
Information-driven dynamic sensor collaboration This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a "sensor collaboration" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications
Fully integrated wideband high-current rectifiers for inductively powered devices This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-μm 1M/2P N-epi BiCMOS, and the AMI 1.5-μm 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm2 in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.
Standards for XML and Web Services Security XML schemas convey the data syntax and semantics for various application domains, such as business-to-business transactions, medical records, and production status reports. However, these schemas seldom address security issues, which can lead to a worst-case scenario of systems and protocols with no security at all. At best, they confine security to transport level mechanisms such as secure sockets layer (SSL). On the other hand, the omission of security provisions from domain schemas opens the way for generic security specifications based on XML document and grammar extensions. These specifications are orthogonal to domain schemas but integrate with them to support a variety of security objectives, such as confidentiality, integrity, and access control. In 2002, several specifications progressed toward providing a comprehensive standards framework for secure XML-based applications. The paper shows some of the most important specifications, the issues they address, and their dependencies.
Random walks in peer-to-peer networks: algorithms and evaluation We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: (a) when the overlay topology is clustered, and (b) when a client re-issues the same query while its horizon does not change much. Related to the simulation of random walks is also the distributed computation of aggregates, such as averaging. For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk on an expander graph can achieve statistical properties similar to independent sampling. This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.
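To make the searching use of random walks concrete, here is a minimal sketch of forwarding a query along a single random walk with a TTL instead of flooding. The overlay topology, item placement, and TTL are invented for illustration; the paper's analysis (clustered topologies, expander maintenance, independent-sampling arguments) is not reproduced.

```python
import random

def random_walk_search(adj, start, wanted, ttl=32, seed=1):
    """Forward a query along a random walk: each step hands the query to one
    random neighbour until the item is found or the TTL expires."""
    rng = random.Random(seed)
    node, hops = start, 0
    while hops < ttl:
        if wanted in adj[node]["items"]:
            return node, hops
        node = rng.choice(adj[node]["nbrs"])
        hops += 1
    return None, hops

# Tiny illustrative overlay: three peers, one of which stores "file42".
overlay = {
    "a": {"nbrs": ["b", "c"], "items": set()},
    "b": {"nbrs": ["a", "c"], "items": set()},
    "c": {"nbrs": ["a", "b"], "items": {"file42"}},
}
print(random_walk_search(overlay, "a", "file42"))
```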
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.1
0.05
0.025
0.02
0
0
0
0
0
0
0
0
0
0
Internet of Things (IoT) for Smart Precision Agriculture and Farming in Rural Areas. Internet of Things (IoT) gives a new dimension in the area of smart farming and agriculture domain. With the use of fog computing and WiFi-based long distance network in IoT, it is possible to connect the agriculture and farming bases situated in rural areas efficiently. To focus on the specific requirements, we propose a scalable network architecture for monitoring and controlling agriculture and...
IIP3 Enhancement of Subthreshold Active Mixers This brief presents a modified subthreshold Gilbert mixer that includes an inductor between the drain of the radio frequency (RF) transistor and the sources of the LO transistors, an inductive source degeneration for the RF transistor, and a cross-coupling capacitor between the source of the RF transistor and the drain of the RF transistor in the other mixer branch to improve third-order distortion characteristics. This linearization technique enables a third-order intermodulation intercept point (IIP3) improvement of at least 10 dB compared to other subthreshold mixers. A 2.4-GHz mixer was designed and simulated using 0.11- μm CMOS technology. In the typical corner case of the postlayout simulations, the linearized mixer achieves 6.7-dBm IIP3, 8.6-dB voltage gain, and 19.2-dB single-sideband noise figure with a low power consumption of 0.423 mW.
A Low-IF/Zero-IF Reconfigurable Analog Baseband IC With an I/Q Imbalance Cancellation Scheme A low-IF/zero-IF reconfigurable analog baseband IC embodying an automatic I/Q imbalance cancellation scheme is reported. The chip, which comprises a down-conversion mixer, an analog baseband filter, and a programmable gain amplifier, achieves a high image rejection of 55 dB without any calibration. It operates over a wide radio frequency range of 0.4-2.4 GHz, and has a cut-off frequency range of 0.3-30 MHz in zero-intermediate frequency (IF) mode and an IF range of 0.2-6 MHz in low-IF mode. The circuit in the receiver chain draws only 4.5-6.2 mA, and the clock generator including LO buffers draws 1.8-6.3 mA from a 1.2-V supply. The chip, implemented in 90-nm CMOS technology, occupies an area of 1.1 .
A 1.9mW 750kb/s 2.4GHz F-OOK transmitter with symmetric FM template and high-point modulation PLL. This paper describes a frequency-domain on-off keying (F-OOK) modulation method which utilizes power detection of both a carrier and a sideband by modulating the carrier frequency with a pre-selected modulation template. Compared to on-off keying and binary frequency-shift keying modulations, more efficient bandwidth control is achieved for the same data rate with a symmetric FM template. Since th...
Smart ITS Sensor for the Transportation Planning Based on IoT Approaches Using Serverless and Microservices Architecture. Currently, there are many challenges in the transportation scope that researchers are attempting to resolve, and one of them is transportation planning. The main contribution of this paper is the design and implementation of an ITS (Intelligent Transportation Systems) smart sensor prototype that incorporates and combines the Internet of Things (IoT) approaches using the Serverless and Microservice...
A 915 MHz, Ultra-Low Power 2-Tone Transceiver With Enhanced Interference Resilience. An ultra-low-power RF envelope-detection radio has been designed for low-power short-range wireless applications. It introduces an interference rejection technique that improves the in-band selectivity by 24.5 dB. The transmitter adopts the power-efficient direct-modulation architecture, and the receiver maintains the simplicity and low power consumption of envelope detection receivers. In the 915...
A Bayesian Method for the Induction of Probabilistic Networks from Data This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
Distributed estimation and quantization An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results
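The distributed design in this paper generalizes the classical Lloyd-Max results; for reference, the sketch below implements the textbook scalar Lloyd-Max iteration (alternating nearest-level assignment and centroid updates) on empirical samples. The sample data, number of levels, and iteration count are arbitrary, and the multi-sensor, bivariate-distribution machinery of the paper is not shown.

```python
def lloyd_max(samples, levels=4, iters=50):
    """Classical scalar Lloyd-Max iteration on empirical data: alternate between
    (1) assigning samples to the nearest reproduction level and
    (2) moving each level to the centroid of its cell."""
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    # Initialize levels spread across the data range.
    reps = [lo + (i + 0.5) * (hi - lo) / levels for i in range(levels)]
    for _ in range(iters):
        cells = [[] for _ in reps]
        for x in samples:
            j = min(range(levels), key=lambda k: abs(x - reps[k]))
            cells[j].append(x)
        reps = [sum(c) / len(c) if c else reps[j]
                for j, c in enumerate(cells)]
    return reps

print(lloyd_max([0.1, 0.2, 0.25, 0.9, 1.1, 1.15, 2.0, 2.2]))
```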
Distributed average consensus with least-mean-square deviation We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.
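The update rule in the stochastic model is simple enough to state in code: each node replaces its value with a weighted average of its neighbors' values plus zero-mean noise. The sketch below is a minimal simulation with an assumed 3-node symmetric weight matrix and noise level; it does not perform the convex weight optimization that is the paper's contribution.

```python
import random

def noisy_consensus(values, weights, steps=100, noise_std=0.01, seed=0):
    """Each step, node i sets x_i <- sum_j W[i][j] * x_j + zero-mean noise,
    as in the stochastic average-consensus model. 'weights' is a symmetric,
    row-stochastic matrix given as a list of lists."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(steps):
        x = [sum(weights[i][j] * x[j] for j in range(n)) +
             rng.gauss(0.0, noise_std) for i in range(n)]
    return x

# Illustrative 3-node network with equal neighbour weights.
W = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
print(noisy_consensus([1.0, 5.0, 9.0], W))
```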
Exploiting ILP, TLP, and DLP with the polymorphous TRIPS architecture This paper describes the polymorphous TRIPS architecture which can be configured for different granularities and types of parallelism. TRIPS contains mechanisms that enable the processing cores and the on-chip memory system to be configured and combined in different modes for instruction, data, or thread-level parallelism. To adapt to small and large-grain concurrency, the TRIPS architecture contains four out-of-order, 16-wide-issue Grid Processor cores, which can be partitioned when easily extractable fine-grained parallelism exists. This approach to polymorphism provides better performance across a wide range of application types than an approach in which many small processors are aggregated to run workloads with irregular parallelism. Our results show that high performance can be obtained in each of the three modes--ILP, TLP, and DLP-demonstrating the viability of the polymorphous coarse-grained approach for future microprocessors.
Joint mismatch and channel compensation for high-speed OFDM receivers with time-interleaved ADCs Analog-to-digital converters (ADCs) with high sampling rates and output resolution are required for the design of mostly digital transceivers in emerging multi-Gigabit communication systems. A promising approach is to use a time-interleaved (TI) architecture with slower sub-ADCs in parallel, but mismatch among the sub-ADCs, if left uncompensated, can cause error floors in receiver performance. Conventional mismatch compensation schemes typically have complexity (in terms of number of multiplications) that increases with the desired resolution at the output of the TI-ADC. In this paper, we investigate an alternative approach, in which mismatch and channel dispersion are compensated jointly, with the performance metric being overall link reliability rather than ADC performance. For an OFDM system, we characterize the structure of mismatch-induced interference, and demonstrate the efficacy of a frequency-domain interference suppression scheme whose complexity is independent of constellation size (which determines the desired resolution). Numerical results from computer simulation and from experiments on a hardware prototype show that the performance with the proposed joint mismatch and channel compensation technique is close to that without mismatch. While the proposed technique works with offline estimates of mismatch parameters, we provide an iterative, online method for joint estimation of mismatch and channel parameters which leverages the training overhead already available in communication signals.
Design and implementation of a reconfigurable FIR filter Finite impulse response (FIR) filters are very important blocks in digital communication systems. Many efforts have been made to improve the filter performance, e.g., less hardware and higher speed. In addition, software radio has recently gained much attention due to the need for integrated and reconfigurable communication systems. To this end, reconfigurability has become an important issue for the future filter design. In this paper, we present a digit-reconfigurable FIR filter architecture with the finest granularity. The proposed architecture is implemented in a single-poly quadruple-metal 0.35-μm CMOS technology. Measurement results show that the fabricated chip consumes 16.5 mW of power when operating at 86 MHz under 2.5 V.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
A Delay-Locked Loop Synchronization Scheme for High-Frequency Multiphase Hysteretic DC-DC Converters This paper reports a delay-locked loop (DLL) based hysteretic controller for high-frequency multiphase dc-dc buck converters. The DLL control loop employs the switching frequency of a hysteretic comparator as reference to automatically synchronize the remaining phases and eliminate the need for external synchronization. A dedicated duty cycle control loop is used to enable current sharing and ripple cancellation. We demonstrate a four-phase high-frequency buck converter that operates at 25-70 MHz with fast hysteretic control and output conversion range of 17.5%-80%. The converter achieves an efficiency of 83% at 2 W and 80% at 3.3 W. The circuit has been implemented in a standard 0.5-μm 5 V CMOS process.
A 10/30MHz PWM buck converter with an accuracy-improved ramp generator In this paper, a 10/30MHz PWM buck converter with an accuracy-improved ramp generator is proposed. To extend the minimum duty ratio of the converter, the valley of the ramp signal is controlled to be larger than 0.2V by the threshold voltage of an NMOS transistor. To shorten the delay time of the comparator in the ramp generator, a high speed two-stage push-pull comparator is presented. Simulation results of the ramp generator with switching frequencies from 10MHz to 30MHz show that the error due to process variations for the ramp, clock and maximum duty ratio are not larger than 10%. To avoid the design of high speed current sensor and maximize the unity-gain frequency of the regulation loop, voltage mode control with type III compensation is adopted for the converter. Measurement results show that the proposed buck converter with 500mA of load range has good steady-state and transient performances for 10MHz and 30MHz of switching frequencies, respectively.
A 1.2-A Buck-Boost LED Driver With On-Chip Error Averaged SenseFET-Based Current Sensing Technique. This paper presents circuit techniques to improve the efficiency of high-current LED drivers. An error-averaged, senseFET-based current sensing technique is used to regulate the LED current accurately. Because the proposed scheme eliminates the series current-regulation element present in all conventional LED drivers, it greatly improves efficiency and reduces cost. The converter operates in three...
Low-Power CMOS Ramp Generator Circuit for DC-DC Converters Ramp-signal generators are particularly critical in controlling the frequency and duty-cycle of pulsewidth modulated (PWM) switching supplies. They are normally implemented with a timed charging current source into a capacitor and a reset switch controlled by two comparators whose reference signals set the lower and upper limits of the ramp. A fast comparator is required to ensure the reset operation is short and therefore mitigate its adverse effects on switching supply frequency. Unfortunately, the resulting delay requirements of the comparator are often stringent and difficult to meet under low power conditions. To alleviate the comparator's bandwidth requirement, a scheme is proposed by which the circuit generates a non-ideal ramp with its reset time equal to 10% of the period, utilizing the fact that the ramp signal is needed in the DC-DC converter's controller feedback loop only until the duty-cycle is set. Therefore, the condition on comparator delay and its respective power is relaxed. The proposed scheme was experimentally verified with a 770 kHz ramp generator prototype embedded in a current-mode controlled buck DC-DC converter built in a 0.5-μm CMOS process, which achieved 28 mV amplitude accuracy with a current consumption of 256 μA.
High Frequency Buck Converter Design Using Time-Based Control Techniques Time-based control techniques for the design of high switching frequency buck converters are presented. Using time as the processing variable, the proposed controller operates with CMOS-level digital-like signals but without adding any quantization error. A ring oscillator is used as an integrator in place of conventional opamp-RC or G m-C integrators while a delay line is used to perform voltage to time conversion and to sum time signals. A simple flip-flop generates pulse-width modulated signal from the time-based output of the controller. Hence time-based control eliminates the need for wide bandwidth error amplifier, pulse-width modulator (PWM) in analog controllers or high resolution analog-to-digital converter (ADC) and digital PWM in digital controllers. As a result, it can be implemented in small area and with minimal power. Fabricated in a 180 nm CMOS process, the prototype buck converter occupies an active area of 0.24 mm2, of which the controller occupies only 0.0375 mm2. It operates over a wide range of switching frequencies (10-25 MHz) and regulates output to any desired voltage in the range of 0.6 V to 1.5 V with 1.8 V input voltage. With a 500 mA step in the load current, the settling time is less than 3.5 μs and the measured reference tracking bandwidth is about 1 MHz. Better than 94% peak efficiency is achieved while consuming a quiescent current of only 2 μA/MHz.
A 5-MHz 91% peak-power-efficiency buck regulator with auto-selectable peak- and valley-current control This paper presents a multi-MHz buck regulator for portable applications using an auto-selectable peak- and valley-current control (ASPVCC) scheme. The proposed ASPVCC scheme and the dynamically-biased shunt feedback in the current sensors relax the settling-time requirement of the current sensing and improve the sensing speed. The proposed converter can thus operate at high switching frequencies with a wide range of duty ratios for reducing the required inductance. Implemented in a 0.35-μm CMOS process, the proposed buck converter can operate at 5-MHz with a duty-ratio range of 0.6, use a small-value off-chip inductor of 1 μH, and achieve 91% peak power efficiency.
Fully Integrated Capacitive DC–DC Converter With All-Digital Ripple Mitigation Technique This paper presents an adaptive all-digital ripple mitigation technique for fully integrated capacitive dc-dc converters. Ripple control is achieved using a two-pronged approach where coarse ripple control is achieved by varying the size of the bucket capacitance, and fine control is achieved by charge/discharge time modulation of the bucket capacitors used to transfer the charge between the input and output, both of which are completely digital techniques. A dual-loop control was used to achieve regulation and ripple control. The primary single-bound hysteretic control loop achieves voltage regulation and the secondary loop is responsible for ripple control. The dual-loop control modulates the charge/discharge pulse width in a hysteretic variable-frequency environment using a simple digital pulse width modulator. The fully integrated converter was implemented in IBM's 130-nm CMOS process. Ripple reduces from 98 to 30 mV, when ripple control secondary loop is enabled for a load of 0.3 V and 4 mA without significantly impacting the converter's core efficiency. Measurement results show constant ripple, independent of output voltage. The converter achieves a maximum efficiency of 70% for Vin= 1.3 V and Vout= 0.5 V and a maximum power density of 24.5 mW/mm2, including the areas for the decoupling capacitor. The maximum power density increases to 68 mW/mm2 if the decoupling capacitor is assumed to be already present as part of the digital design.
Variable Off-Time Control Loop for Current-Mode Floating Buck Converters in LED Driving Applications A versatile controller architecture, used in current-mode floating buck converters for LED driving, is developed. State-of-the-art controllers rely on a fixed switching period and variable duty cycle, focusing on current averaging circuits. Instead, the proposed controller architecture is based on fixed peak current and adaptable off time as the average current control method. The control loop is comprised of an averaging block, transconductance amplifier, and an innovative time modulator. This modulator is intended to provide constant control loop response regardless of input voltage, current storage inductor, and number of LEDs in order to improve converter applicability for LED drivers. Fabricated in a 5 V standard 0.5 μm CMOS technology, the prototype controller is implemented and tested in a current-mode floating buck converter. The converter exhibits sound continuous conduction mode (CCM) operation for input voltages between 11 and 20 V, and a wide inductor range of 100-1000 μH. In all instances, the measured average LED current variation was lower than 10% of the desired value. A maximum conversion efficiency of 91% is obtained when driving 50 mA through four LEDs (with 14 V input voltage and an inductor of 470 μH). A stable CCM converter operation is also proven by simulation for nine LEDs and 45 V input voltage.
A 90-nm CMOS Doherty power amplifier with minimum AM-PM distortion A linear Doherty amplifier is presented. The design reduces AM-PM distortion by optimizing the device-size ratio of the carrier and peak amplifiers to cancel each other's phase variation. Consequently, this design achieves both good linearity and high backed-off efficiency associated with the Doherty technique, making it suitable for systems with large peak-to-average power ratio (WLAN, WiMAX, etc.). The fully integrated design has on-chip quadrature hybrid coupler, impedance transformer, and output matching networks. The experimental 90-nm CMOS prototype operating at 3.65 GHz achieves 12.5% power-added efficiency (PAE) at 6 dB back-off, while exceeding IEEE 802.11a -25 dB error vector magnitude (EVM) linearity requirement (using 1.55-V supply). A 28.9 dBm maximum Psat is achieved with 39% PAE (using 1.85-V supply). The active die area is 1.2 mm2.
Network-based robust H∞ control of systems with uncertainty This paper is concerned with the design of robust H∞ controllers for uncertain networked control systems (NCSs) with the effects of both the network-induced delay and data dropout taken into consideration. A new analysis method for H∞ performance of NCSs is provided by introducing some slack matrix variables and employing the information of the lower bound of the network-induced delay. The designed H∞ controller is of memoryless type, which can be obtained by solving a set of linear matrix inequalities. Numerical examples and simulation results are given finally to illustrate the effectiveness of the method.
AIR Tools - A MATLAB package of algebraic iterative reconstruction methods We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter and the stopping rule. The relaxation parameter can be fixed, or chosen adaptively in each iteration; in the former case we provide a new "training" algorithm that finds the optimal parameter for a given test problem. The stopping rules provided are the discrepancy principle, the monotone error rule, and the NCP criterion; for the first two methods "training" can be used to find the optimal discrepancy parameter.
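As one representative of the ART class the package implements, the sketch below codes the basic Kaczmarz sweep with a fixed relaxation parameter in plain Python. The package itself is MATLAB and also provides SIRT methods, parameter training, and the stopping rules listed above, none of which are shown here; the tiny test system is invented.

```python
def kaczmarz(A, b, sweeps=20, relax=1.0):
    """Basic ART (Kaczmarz) row-action sweep: project the current iterate onto
    the hyperplane of each row of A in turn. 'relax' is the fixed relaxation
    parameter; adaptive choices and stopping rules are omitted."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            if norm2 == 0.0:
                continue
            step = relax * (bi - dot) / norm2
            x = [xi + step * r for xi, r in zip(x, row)]
    return x

# Tiny consistent system: x0 + x1 = 3, x0 - x1 = 1  ->  x = (2, 1)
print(kaczmarz([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0]))
```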
High-Voltage-Gain CMOS LNA For 6–8.5-GHz UWB Receivers The design of a fully integrated CMOS low noise amplifiers (LNA) for ultra-wide-band (UWB) integrated receivers is presented. An original LC input matching cell architecture enables fractional bandwidths of about 25%, with practical values, that match the new ECC 6-8.5-GHz UWB frequency band. An associated design method which allows low noise figure and high voltage gain is also presented. Measurements results on an LNA prototype fabricated in a 0.13-mum standard CMOS process show average voltage gain and noise figure of 29.5 and 4.5 dB, respectively.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than ones previously received. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, providing that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing Halanay inequality to time-varying differential inequalities.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.04327
0.04
0.022422
0.013357
0.008969
0.004141
0.000647
0.00018
0.000002
0
0
0
0
0
Policy Iteration Approach to the Infinite Horizon Average Optimal Control of Probabilistic Boolean Networks. This article studies the optimal control of probabilistic Boolean control networks (PBCNs) with the infinite horizon average cost criterion. By resorting to the semitensor product (STP) of matrices, a nested optimality equation for the optimal control problem of PBCNs is proposed. The Laurent series expression technique and the Jordan decomposition method derive a novel policy iteration-type algor...
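The article's algorithm is built on the semitensor product and a nested optimality equation, which the abstract only outlines. As background, the sketch below is the textbook average-cost policy iteration for a generic finite unichain MDP (gain/bias evaluation followed by greedy improvement), not the PBCN-specific method; the transition matrices and costs in the example are invented. For a PBCN with n state nodes and m control nodes, the states would be the 2^n Boolean state vectors and the actions the 2^m control inputs, which is where the STP machinery of the article comes in.

```python
import numpy as np

def avg_cost_policy_iteration(P, c, iters=50):
    """Textbook policy iteration for the long-run average cost of a finite
    unichain MDP. Policy evaluation solves the gain/bias equations
        h(s) = c(s, pi(s)) - g + sum_s' P[pi(s)][s, s'] * h(s'),  h(0) := 0,
    and improvement is greedy in the resulting bias h. Illustrative only;
    this is not the STP-based algorithm of the article.
    P[a] is an (n x n) transition matrix for action a; c[s, a] is the cost."""
    n, m = c.shape
    policy = np.zeros(n, dtype=int)
    g = 0.0
    for _ in range(iters):
        # Policy evaluation: unknowns are (g, h(1), ..., h(n-1)), with h(0) = 0.
        A = np.zeros((n, n))
        rhs = np.array([c[s, policy[s]] for s in range(n)])
        for s in range(n):
            A[s, 0] = 1.0                                   # coefficient of g
            for sp in range(1, n):
                A[s, sp] = (1.0 if s == sp else 0.0) - P[policy[s]][s, sp]
        sol = np.linalg.solve(A, rhs)
        g, h = sol[0], np.concatenate(([0.0], sol[1:]))
        # Policy improvement: greedy with respect to the bias h.
        q = np.array([[c[s, a] + P[a][s] @ h for a in range(m)] for s in range(n)])
        new_policy = q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return g, policy

# Invented 2-state, 2-action example: action 1 costs more now but steers the
# chain back to the cheap state 0.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transitions under action 0
     np.array([[0.5, 0.5], [0.9, 0.1]])]   # transitions under action 1
c = np.array([[0.0, 1.0],
              [2.0, 3.0]])
print(avg_cost_policy_iteration(P, c))     # -> (average cost, optimal policy)
```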
A mixed finite element method for a time-fractional fourth-order partial differential equation In this paper, a numerical theory based on the mixed finite element method for a time-fractional fourth-order partial differential equation (PDE) is presented and analyzed. An auxiliary variable σ = Δu is introduced, so that the fourth-order equation can be split into a coupled system of two second-order equations. The time Caputo-fractional derivative is discretized by a finite difference method and the spatial direction is approximated by the mixed finite element method. The stabilities based on a priori analysis for the two variables are discussed and some a priori error estimates in the L^2-norm for the scalar unknown u and the variable σ = Δu are derived, respectively. Moreover, an a priori error result in the H^1-norm for the scalar unknown u is also proved. To verify the theoretical analysis, a numerical test is carried out using a MATLAB procedure.
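As background for the splitting described above (standard textbook definitions and a model equation, not reproduced from the paper), the Caputo fractional derivative of order $\alpha \in (0,1)$ is

\[
{}^{C}_{0}D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, \frac{\partial u}{\partial s}(s)\, \mathrm{d}s,
\]

and for a model fourth-order problem ${}^{C}_{0}D_t^{\alpha} u + \Delta^2 u = f$ the auxiliary variable $\sigma = \Delta u$ yields the coupled second-order system

\[
\sigma = \Delta u, \qquad {}^{C}_{0}D_t^{\alpha} u + \Delta \sigma = f,
\]

which is the form amenable to a mixed finite element discretization.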
On Decomposed Subspaces of Finite Games. This note provides the detailed description of the decomposed subspaces of finite games. First, the basis of potential games and the basis of non-strategic games are revealed. Then the bases of pure potential and pure harmonic subspaces are also obtained. These bases provide an explicit formula for the decomposition, and are convenient for investigating the properties of the corresponding subspaces. As an application, we consider the dynamics of networked evolutionary games (NEGs). Three problems are considered: 1) the dynamic equivalence of evolutionary games; 2) the dynamics of near potential games; and 3) the decomposition of NEGs.
Evolutionary game theoretic demand-side management and control for a class of networked smart grid. In this paper, a new demand-side management problem of networked smart grid is formulated and solved based on evolutionary game theory. The objective is to minimize the overall cost of the smart grid, where individual communities can switch between grid power and local power according to strategies of their neighbors. The distinctive feature of the proposed formulation is that, a small portion of the communities are cooperative, while others pursue their own benefits. This formulation can be categorized as control networked evolutionary game, which can be solved systematically by using semi-tensor product. A new binary optimal control algorithm is applied to optimize transient performances of the networked evolutionary game.
On Constructing Multiple Lyapunov Functions for Tracking Control of Multiple Agents with Switching Topologies Distributed consensus tracking for linear multiagent systems (MASs) with directed switching topologies and a dynamic leader is investigated in this paper. By fully considering the special feature of Laplacian matrices for topology candidates, several new classes of multiple Lyapunov functions (MLFs) are constructed in this paper for leader-following MASs with, respectively, an autonomous leader and a nonautonomous leader. Under the condition that each possible topology graph contains a spanning tree rooted at the leader node, some efficient criteria for achieving consensus tracking in the considered MASs are provided. Specifically, it is proven that consensus tracking in the closed-loop MASs can be ensured if the average dwell time for switching among different topologies is larger than a derived positive quantity and the control parameters in tracking protocols are appropriately designed. It is further theoretically shown that the present Lyapunov inequality based criteria for consensus tracking with an autonomous leader are much less conservative than the existing ones derived by the $M$-matrix theory. The results are then extended to the case where the topology graph only frequently contains a directed spanning tree as the MASs evolve over time. At last, numerical simulations are performed to illustrate the effectiveness of the analytical analysis and the advantages of the proposed MLFs.
Criteria for Observability and Reconstructibility of Boolean Control Networks via Set Controllability This brief focuses on observability and reconstructibility of Boolean control networks (BCNs) via the semi-tensor product of matrices. To solve the problems, for a given BCN, an augmented BCN is constructed. By partitioning the state-space of the augmented BCN, observability and reconstructibility of the given BCN are equivalently converted to set controllability of the augmented BCN. Using the set controllability approach, criteria for observability and reconstructibility of BCNs are obtained respectively. Compared with the existing results, the criteria proposed in this brief are easy to check. The effectiveness of the results is verified both theoretically through mathematical derivations and practically by an illustrative example.
Event-Triggering Sampling Based Leader-Following Consensus in Second-Order Multi-Agent Systems In this note, the problem of second-order leader-following consensus by a novel distributed event-triggered sampling scheme in which agents exchange information via a limited communication medium is studied. Event-based distributed sampling rules are designed, where each agent decides when to measure its own state value and requests its neighbor agents to broadcast their state values across the network when a locally computed measurement error exceeds a state-dependent threshold. For the case of fixed topology, a necessary and sufficient condition is established. For the case of switching topology, a sufficient condition is obtained under the assumption that the time-varying directed graph is uniformly jointly connected. It is shown that the inter-event intervals are lower bounded by a strictly positive constant, which excludes Zeno behavior before consensus is achieved. Numerical simulation examples are provided to demonstrate the correctness of the theoretical results.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM–SIAM Symposium on Discrete Algorithms," pp. 372–381, 1994) have shown that our algorithm is indeed optimal for all w ⩾ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
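For intuition about the cost model, the classical deterministic strategy for the two-path case (walk geometrically growing excursions, alternating direction) is easy to simulate. The sketch below uses the well-known doubling rule, whose worst-case competitive ratio is 9 for w = 2; this is the deterministic baseline that the paper's randomized algorithm improves on in expectation.

def doubling_search_cost(goal, ratio=2.0):
    """Total distance walked by the deterministic doubling strategy on a line.
    The searcher starts at the origin; the goal is at position `goal` (sign gives
    the side) and is assumed to be at distance at least 1 from the origin."""
    assert abs(goal) >= 1.0
    cost, step, direction = 0.0, 1.0, 1          # start by searching to the right
    while True:
        if direction > 0 and 0 < goal <= step:
            return cost + goal                   # found on this rightward excursion
        if direction < 0 and 0 < -goal <= step:
            return cost + (-goal)                # found on this leftward excursion
        cost += 2 * step                         # walk out and return to the origin
        step *= ratio
        direction = -direction

# Worst case is a goal just past a turning point: the ratio approaches 9.
d = 2.0 ** 10 + 1e-9
print(doubling_search_cost(d) / d)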
Pinning adaptive synchronization of a general complex dynamical network There are two challenging fundamental questions in pinning control of complex networks: (i) how many nodes of a network with fixed network structure and coupling strength should be pinned to reach network synchronization? (ii) how much coupling strength should be applied to a network with fixed network structure and pinned nodes to realize network synchronization? To address these two questions, we propose a general complex dynamical network model and then further investigate its pinning adaptive synchronization. Based on this model, we obtain several novel adaptive synchronization criteria which indeed give positive answers to these two questions. That is, we provide a simple approximate formula for estimating the required number of pinned nodes and the magnitude of the coupling strength for a given general complex dynamical network. Here, the coupling-configuration matrix and the inner-coupling matrix are not necessarily symmetric. Moreover, our pinning adaptive controllers are rather simple compared with some traditional controllers. A Barabási–Albert network example is finally given to show the effectiveness of the proposed synchronization criteria.
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
Quadrature Bandpass Sampling Rules for Single- and Multiband Communications and Satellite Navigation Receivers In this paper, we examine how existing rules for bandpass sampling rates can be applied to quadrature bandpass sampling. We find that there are significantly more allowable sampling rates and that the minimum rate can be reduced.
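The "existing rules" referred to above are the classical uniform bandpass-sampling constraints for a real signal. The sketch below enumerates the valid sampling-rate ranges under that textbook rule (the quadrature case discussed in the paper admits additional rates); the example band is arbitrary.

import math

def real_bandpass_sampling_ranges(f_low, f_high):
    """Valid uniform sampling-rate ranges for a real bandpass signal occupying
    [f_low, f_high], per the classical rule 2*f_high/n <= fs <= 2*f_low/(n-1)."""
    bandwidth = f_high - f_low
    ranges = []
    for n in range(1, int(f_high // bandwidth) + 1):
        lo = 2.0 * f_high / n
        hi = float('inf') if n == 1 else 2.0 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# Example: a 5 MHz-wide band centred at 70 MHz (67.5-72.5 MHz).
for n, lo, hi in real_bandpass_sampling_ranges(67.5e6, 72.5e6):
    upper = "inf" if math.isinf(hi) else f"{hi / 1e6:.2f} MHz"
    print(f"n={n}: {lo / 1e6:.2f} MHz <= fs <= {upper}")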
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.1
0.1
0.1
0.1
0.1
0.1
0.014286
0
0
0
0
0
0
0
Wideband Low Noise Variable Gain Amplifier A low noise variable gain amplifier (LNVGA) is designed for wideband operation. Since a low noise figure (NF) and a high 1 dB compression point (P1dB), i.e., a large dynamic range, are difficult to achieve in CMOS technology, gain controllability is exploited to increase the overall system dynamic range. The LNVGA is composed of a low noise amplifier (LNA) stage and a voltage variable attenuator (VVA) stage. The former aims to keep the NF low, and the latter aims to provide a very large gain variation. Also, an output buffer is used to allow for measurement with 50 Ω probes. In addition, a novel Active Balun topology is proposed which achieves competitive results for magnitude imbalance and phase imbalance. The LNVGA simulation results show a gain control range of 47.7 dB: the voltage gain varies from -26.7 dB to 21 dB, the minimum NF is 3.43 dB, the IIP3 is -4.6 dBm, and the band of operation extends from 200 MHz to 3.5 GHz.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
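Dominance frontiers, the key concept above, can be computed with a very short routine once immediate dominators are known. The sketch below uses the compact per-join-node formulation later popularized by Cooper, Harvey, and Kennedy, which produces the same sets but is not the paper's original algorithm; the toy CFG is illustrative.

def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from predecessor lists and immediate dominators.
    preds : dict mapping each node to the list of its CFG predecessors
    idom  : dict mapping each node to its immediate dominator (entry maps to itself)
    Returns a dict node -> set of nodes in its dominance frontier."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) < 2:              # only join nodes contribute frontier entries
            continue
        for p in ps:
            runner = p
            while runner != idom[n]:
                df[runner].add(n)
                runner = idom[runner]
    return df

# Small diamond-shaped CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {'entry': [], 'a': ['entry'], 'b': ['entry'], 'join': ['a', 'b']}
idom = {'entry': 'entry', 'a': 'entry', 'b': 'entry', 'join': 'entry'}
print(dominance_frontiers(preds, idom))   # {'a': {'join'}, 'b': {'join'}, ...}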
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
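Chord's core mapping, hashing nodes and keys onto one identifier circle and assigning each key to its successor node, is compact enough to sketch. The snippet below is a single-process illustration with a tiny identifier space and no finger tables or churn handling, so it only shows the key-to-node assignment rule.

import hashlib
from bisect import bisect_left

M = 16   # identifier space has 2**M points (kept small for illustration)

def chord_id(name):
    """Hash a node or key name onto the identifier circle."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, 'big') % (2 ** M)

def successor(node_ids, key_id):
    """First node id equal to or clockwise after key_id (wrapping around the ring)."""
    idx = bisect_left(node_ids, key_id)
    return node_ids[idx % len(node_ids)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
for key in ("alice.txt", "bob.txt"):
    print(key, "->", successor(nodes, chord_id(key)))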
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
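As a concrete instance of the method surveyed above, the lasso problem, minimize (1/2)||Ax - b||^2 + λ||z||_1 subject to x - z = 0, has a short ADMM iteration. The sketch below uses a fixed penalty parameter and a fixed iteration count rather than the usual primal/dual residual stopping tests; the toy data are arbitrary.

import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to x - z = 0."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse each iteration
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)                   # x-update: (A^T A + rho I) x = rhs
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)        # z-update: prox of lam*||.||_1
        u = u + x - z                               # scaled dual update
    return z

# Toy sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))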
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above the road surface in the car, the communication coverage range can be extended up to 20 m at a data rate of 2 Mbps.
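For orientation, a generic Lambertian line-of-sight link budget of the kind commonly used in VLC analyses is sketched below; note that the paper itself uses a measured, market-weighted headlamp beam pattern rather than this idealized model, and the numbers in the example are arbitrary.

import math

def los_received_power(p_tx, d, phi, psi, half_angle_deg=30.0, area=1e-4):
    """Received optical power over an idealized Lambertian LOS link.
    p_tx : transmitted optical power [W];  d : link distance [m]
    phi  : emission angle [rad];           psi : incidence angle [rad]
    half_angle_deg : source semi-angle at half power;  area : detector area [m^2]"""
    m = -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))  # Lambertian order
    gain = (m + 1) / (2 * math.pi * d ** 2) * (math.cos(phi) ** m) * area * math.cos(psi)
    return p_tx * gain

print(los_received_power(p_tx=1.0, d=20.0, phi=0.1, psi=0.1))   # received power in watts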
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A highly sensitive fiber optic sensor based on two-core fiber for refractive index measurement. A simple and compact fiber optic sensor based on a two-core fiber is demonstrated for high-performance measurements of refractive indices (RI) of liquids. In order to demonstrate the suitability of the proposed sensor to perform high-sensitivity sensing in a variety of applications, the sensor has been used to measure the RI of binary liquid mixtures. Such measurements can accurately determine the salinity of salt water solutions, and detect the water content of adulterated alcoholic beverages. The largest sensitivity of the RI sensor that has been experimentally demonstrated is 3,119 nm per refractive index unit (RIU) for the RI range from 1.3160 to 1.3943. On the other hand, our results suggest that the sensitivity can be enhanced to approximately 3,485.67 nm/RIU for the same RI range.
Recent progress in distributed fiber optic sensors. Rayleigh, Brillouin and Raman scatterings in fibers result from the interaction of photons with local material characteristic features like density, temperature and strain. For example an acoustic/mechanical wave generates a dynamic density variation; such a variation may be affected by local temperature, strain, vibration and birefringence. By detecting changes in the amplitude, frequency and phase of light scattered along a fiber, one can realize a distributed fiber sensor for measuring localized temperature, strain, vibration and birefringence over lengths ranging from meters to one hundred kilometers. Such a measurement can be made in the time domain or frequency domain to resolve location information. With coherent detection of the scattered light one can observe changes in birefringence and beat length for fibers and devices. Progress on state-of-the-art technology for sensing performance, in terms of spatial resolution and limitations on sensing length, is reviewed. These distributed sensors can be used for disaster prevention in the civil structural monitoring of pipelines, bridges, dams and railroads. A sensor with centimeter spatial resolution and high precision measurement of temperature, strain, vibration and birefringence can find applications in aerospace smart structures, material processing, and the characterization of optical materials and devices.
Microfiber optical sensors: a review. With diameter close to or below the wavelength of guided light and high index contrast between the fiber core and the surrounding, an optical microfiber shows a variety of interesting waveguiding properties, including widely tailorable optical confinement, evanescent fields and waveguide dispersion. Among various microfiber applications, optical sensing has been attracting increasing research interest due to its possibilities of realizing miniaturized fiber optic sensors with small footprint, high sensitivity, fast response, high flexibility and low optical power consumption. Here we review recent progress in microfiber optical sensors regarding their fabrication, waveguide properties and sensing applications. Typical microfiber-based sensing structures, including biconical tapers, optical gratings, circular cavities, Mach-Zehnder interferometers and functionally coated/doped microfibers, are summarized. Categorized by sensing structures, microfiber optical sensors for refractive index, concentration, temperature, humidity, strain and current measurement in gas or liquid environments are reviewed. Finally, we conclude with an outlook for challenges and opportunities of microfiber optical sensors.
An Exposed-Core Grapefruit Fibers Based Surface Plasmon Resonance Sensor To solve the problem of air hole coating and analyte filling in microstructured optical fiber-based surface plasmon resonance (SPR) sensors, we designed an exposed-core grapefruit fiber (EC-GF)-based SPR sensor. The exposed section of the EC-GF is coated with an SPR-supporting thin silver film, which can sense the analyte in the external environment. The asymmetrically coated fiber supports two separate resonance peaks (x- and y-polarized) with orthogonal polarizations; the x-polarized peak provides a much higher peak loss than the y-polarized one and also has higher wavelength and amplitude sensitivities. A large analyte refractive index (RI) range from 1.33 to 1.42 is calculated to investigate the sensing performance of the sensor, and an extremely high wavelength sensitivity of 13,500 nm/refractive index unit (RIU) is obtained. The silver layer thickness, which may affect the sensing performance, is also discussed. This work can provide a reference for developing a high sensitivity, real-time, fast-response, and distributed SPR RI sensor.
Fiber optic sensor for acoustic detection of partial discharges in oil-paper insulated electrical systems. A fiber optic interferometric sensor with an intrinsic transducer along a length of the fiber is presented for ultrasound measurements of the acoustic emission from partial discharges inside oil-filled power apparatus. The sensor is designed for high sensitivity measurements in a harsh electromagnetic field environment, with wide temperature changes and immersion in oil. It allows enough sensitivity for the application, for which the acoustic pressure is in the range of units of Pa at a frequency of 150 kHz. In addition, the accessibility to the sensing region is guaranteed by immune fiber-optic cables and the optical phase sensor output. The sensor design is a compact and rugged coil of fiber. In addition to a complete calibration, the in-situ results show that two types of partial discharges are measured through their acoustic emissions with the sensor immersed in oil.
Photonic crystal fiber Mach-Zehnder interferometer for refractive index sensing. We report on a refractive index sensor using a photonic crystal fiber (PCF) interferometer which was realized by fusion splicing a short section of PCF (Blaze Photonics, LMA-10) between two standard single mode fibers. The fully collapsed air holes of the PCF at the splice regions allow the coupling of PCF core and cladding modes, which forms a Mach-Zehnder interferometer. The transmission spectrum exhibits a sinusoidal interference pattern which shifts differently when the cladding/core surface of the PCF is immersed in surrounding media of different RI. Experimental results using wavelength-shift interrogation for sensing different concentrations of sucrose solution show that a resolution of 1.62 × 10^-4 to 8.88 × 10^-4 RIU or 1.02 × 10^-4 to 9.04 × 10^-4 RIU (for sensing lengths of 3.50 or 5.00 cm, respectively) was achieved for refractive indices in the range of 1.333 to 1.422, suggesting that the PCF interferometer is attractive for chemical, biological and biochemical sensing with aqueous solutions, as well as for civil engineering and environmental monitoring applications.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with a maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR).
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
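The randomized, rotating cluster-head election at the heart of LEACH is usually described by a per-round threshold test. The sketch below implements the widely cited rule T = p / (1 - p (r mod 1/p)) for nodes that have not served recently, and should be read as an illustration rather than a verbatim transcription of the paper's protocol.

import random

def leach_is_cluster_head(p, round_no, served_recently, rng=random.random):
    """Return True if a node elects itself cluster head this round.
    p               : desired fraction of cluster heads per round
    round_no        : current round index r
    served_recently : True if the node was a head within the last 1/p rounds"""
    if served_recently:
        return False                              # rotate the role among nodes
    epoch = int(round(1.0 / p))
    threshold = p / (1.0 - p * (round_no % epoch))
    return rng() < threshold

# Over one epoch of 1/p rounds, every node ends up serving as head once.
random.seed(1)
p, n_nodes = 0.1, 100
served = [False] * n_nodes
total_heads = 0
for r in range(int(1 / p)):
    for i in range(n_nodes):
        if leach_is_cluster_head(p, r, served[i]):
            served[i] = True
            total_heads += 1
print(total_heads)   # equals n_nodes: each node has served by the end of the epoch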
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that β must be at least n-1 in general, but that β ∈ Ω(n^2/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with β ∈ O(n), this represents a significant, almost linear gap between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n^3/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n^3 log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n^2), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Implementation of LTE SC-FDMA on the USRP2 software defined radio platform In this paper we discuss the implementation of a Single Carrier Frequency Division Multiple Access (SC-FDMA) transceiver running over the Universal Software Radio Peripheral 2 (USRP2). SC-FDMA is the air interface which has been selected for the uplink in the latest Long Term Evolution (LTE) standard. In this paper we derive an AWGN channel model for SC-FDMA transmission, which is useful for benchmarking experimental results. In our implementation, we deal with signal scaling, equalization and partial synchronization to realize SC-FDMA transmission over a noisy channel at rates up to 5.184 Mbit/s. Experimental results on the Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) are presented and compared to theoretical and simulated performance.
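The DFT-spread-OFDM structure that defines SC-FDMA (an M-point DFT precoder, subcarrier mapping, then an N-point IDFT) is compact enough to show directly. The snippet below generates one localized-mapping SC-FDMA symbol from QPSK data; the sizes, the omission of the cyclic prefix, and the mapping offset are illustrative choices, not the authors' USRP2 parameters.

import numpy as np

def scfdma_symbol(bits, n_dft=64, n_ifft=256, first_subcarrier=0):
    """One localized SC-FDMA symbol: QPSK map -> M-point DFT precoding ->
    localized subcarrier mapping -> N-point IFFT (cyclic prefix omitted)."""
    assert len(bits) == 2 * n_dft
    b = np.asarray(bits).reshape(-1, 2)
    qpsk = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    freq = np.fft.fft(qpsk, n_dft) / np.sqrt(n_dft)          # unitary DFT "spreading"
    grid = np.zeros(n_ifft, dtype=complex)
    grid[first_subcarrier:first_subcarrier + n_dft] = freq   # localized mapping
    return np.fft.ifft(grid, n_ifft) * np.sqrt(n_ifft)       # time-domain samples

rng = np.random.default_rng(0)
x = scfdma_symbol(rng.integers(0, 2, size=128))
print(x.shape, round(float(np.mean(np.abs(x) ** 2)), 3))     # (256,) and power ~ 64/256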
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.115194
0.105887
0.105887
0.105887
0.01235
0.000291
0
0
0
0
0
0
0
0
Model-Based Design for Software Defined Radio on an FPGA. This paper presents an approach of model-based design for implementing a digital communication system on a field programmable gate array (FPGA) for a software defined radio (SDR). SDR is a popular prototyping platform for wireless communication systems due to its flexibility and utility. A traditional SDR system performs nearly all computations and signal processing tasks on the host computer, and then sends the waveform to the RF front end. For complex algorithms or high data rate, the host computer becomes the processing bottleneck and FPGA is often employed as a hardware accelerator. This paper demonstrates the procedure of using model-based design for SDR targeted on FPGA hardware. A complete digital communication system, including a transmitter with convolutional encoder and a receiver with Viterbi decoder, is implemented on an FPGA-based SDR platform and validated by over-the-air demonstration. Synchronization algorithms, such as carrier frequency offset, phase offset, and time recovery, are also optimized for hardware efficiency.
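As a flavor of the transmitter chain described above, a rate-1/2, constraint-length-3 convolutional encoder (the classic (7,5) octal generator pair, a common textbook choice that is not necessarily the code used in the paper) fits in a few lines:

def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k; the generators
    g1 and g2 are bit masks applied to the k most recent input bits."""
    state = 0
    coded = []
    for b in list(bits) + [0] * (k - 1):              # flush bits return the encoder to state 0
        state = ((state << 1) | b) & ((1 << k) - 1)
        coded.append(bin(state & g1).count('1') % 2)  # parity from generator 1
        coded.append(bin(state & g2).count('1') % 2)  # parity from generator 2
    return coded

print(conv_encode([1, 0, 1, 1]))   # 12 coded bits for 4 data bits plus 2 flush bits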
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitors. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 32-Gb/s PAM-4 Quarter-Rate Clock and Data Recovery Circuit With an Input Slew-Rate Tolerant Selective Transition Detector We present a 32-Gb/s PAM-4 quarter-rate clock and data recovery (CDR) circuit having a newly proposed selective transition detector (STD). The STD allows phase detection of PAM-4 data in a simple manner by eliminating middle transition and majority voting with simple logic gates. In addition, using the edge-rotating technique with quarter-rate CDR operation, our CDR reduces both power consumption and chip area. A prototype 32-Gb/s quarter-rate PAM-4 CDR circuit is realized in 28-nm CMOS technology. The CDR circuit consumes 32 mW from a 1.2-V supply, and the recovered clock signal has 0.0136-UI rms jitter.
Current-Mode Triline Transceiver for Coded Differential Signaling Across On-Chip Global Interconnects. This paper presents a current-mode triline ternary-level coded differential signaling scheme for high-speed data transmission across on-chip global interconnects. An energy-efficient current-mode triline transceiver pair suitable for this signaling scheme has been proposed. Compared with a voltage-mode receiver with resistive termination, the proposed active terminated current-mode receiver reduces...
Area and Power Efficient 10B6Q PAM-4 DC Balance Coder for Automotive Camera Link We propose an area- and power-efficient four-level pulse amplitude modulation (PAM-4) encoder/decoder for an AC coupled link system that guarantees DC balance and limited run length. Configured as 10B6Q, input data of 10 bits are mapped to 5 quaternary symbols. One of four candidate codewords that have inverted symbols at predefined positions is selected that best satisfies the requirements of DC balance and the number of transitions. One quaternary symbol is added at the end to indicate which candidate codeword is selected. This coding scheme not only guarantees the PAM-4 DC balance but also increases transition density from 75% (PRBS) to 85.6% and limits the maximum run length down to six, thus facilitating timing recovery of the receiver. Although the input data width of 10 bits is used in this implementation, the proposed scheme can be extended to wider input data with higher coding efficiency. The proposed 10B6Q encoder is fabricated in the 40 nm CMOS technology and occupies an active area of 0.0009 mm² with a synthesized gate count of 645. It consumes 0.23 mW at the operating clock frequency of 667 MHz.
A 1.02-pJ/b 20.83-Gb/s/Wire USR Transceiver Using CNRZ-5 in 16-nm FinFET. An energy-efficient (1.02 pJ/b) and high-speed (20.83 Gb/s/wire, 417 Gb/s/mm) link for ultra-short reach (USR) applications (up to 6-dB channel loss at the Nyquist frequency of 12.5 GHz) is presented. Correlated non-return to zero (CNRZ) signaling with low sensitivity to inter-symbol interference (ISI) has been developed to improve the link budget. In addition to high pin efficiency (5b6w: 5 bits ...
A Single-Ended Parallel Transceiver With Four-Bit Four-Wire Four-Level Balanced Coding for the Point-to-Point DRAM Interface. A four-bit four-wire four-level (4B4W4L) single-ended parallel transceiver for the point-to-point DRAM interface achieved a peak reduction of ~10 dB in the electromagnetic interference (EMI) H-field power, compared to a conventional 4-bit parallel binary transceiver with the same output driver power of transmitter (TX) and the same input voltage margin of receiver (RX). A four-level balanced codin...
A 0.14-to-0.29-pJ/bit 14-GBaud/s Trimodal (NRZ/PAM-4/PAM-8) Half-Rate Bang-Bang Clock and Data Recovery (BBCDR) Circuit in 28-nm CMOS This paper reports a half-rate bang-bang clock and data recovery (BBCDR) circuit supporting the trimodal (NRZ/PAM-4/PAM-8) operation. The observation of their crossover-points distribution at the transitions introduces the single-loop phase tracking technique. In addition, low-power techniques at both the architecture and circuit levels are employed to greatly improve the overall energy efficiency and multiply data throughput by increasing the number of levels on the magnitude. Fabricated in 28-nm CMOS, our BBCDR prototype scores a 0.29/0.17/0.14 pJ/bit efficiency at 14.4/28.8/43.2 Gb/s under NRZ/PAM-4/PAM-8 modes, respectively. The jitter is < 0.53 ps (integrated from 100 Hz to 1 GHz) with approximately-equivalent constant loop bandwidth, and we achieve at least 1-UIpp jitter tolerance up to 10 MHz for all the three modes.
Tapestry: a resilient global-scale overlay for service deployment We present Tapestry, a peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service using only localized resources. Tapestry supports a generic decentralized object location and routing applications programming interface using a self-repairing, soft-state-based routing layer. The paper presents the Tapestry architecture, algorithms, and implementation. It explores the behavior of a Tapestry deployment on PlanetLab, a global testbed of approximately 100 machines. Experimental results show that Tapestry exhibits stable behavior and performance as an overlay, despite the instability of the underlying network layers. Several widely distributed applications have been implemented on Tapestry, illustrating its utility as a deployment infrastructure.
A Low-Power Fast-Transient 90-nm Low-Dropout Regulator With Multiple Small-Gain Stages A power-efficient 90-nm low-dropout regulator (LDO) with multiple small-gain stages is proposed in this paper. The proposed channel-resistance-insensitive small-gain stages provide loop gain enhancements without introducing low-frequency poles before the unity-gain frequency (UGF). As a result, both the loop gain and bandwidth of the LDO are improved, so that the accuracy and response speed of voltage regulation are significantly enhanced. As no on-chip compensation capacitor is required, the active chip area of the LDO is only 72.5 μm × 37.8 μm. Experimental results show that the LDO is capable of providing an output of 0.9 V with maximum output current of 50 mA from a 1-V supply. The LDO has a quiescent current of 9.3 μA, and shows significant improvement in line and load transient responses as well as in power-supply rejection ratio (PSRR) performance.
Energy-Efficient Communication Protocol for Wireless Microsensor Networks Wireless distributed micro-sensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.
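A sketch of the randomized cluster-head rotation described above is given below. The threshold form is the one commonly associated with LEACH, and the function and parameter names are illustrative rather than taken from the paper.

```python
import random

def leach_threshold(P, r, was_cluster_head_recently):
    """Threshold T(n) used in LEACH-style randomized cluster-head rotation.

    P is the desired fraction of cluster heads and r is the current round.
    Nodes that served as cluster head within the last 1/P rounds are excluded,
    so the energy load rotates evenly across the network.
    """
    if was_cluster_head_recently:
        return 0.0
    return P / (1.0 - P * (r % int(round(1.0 / P))))

def elects_itself(P, r, was_cluster_head_recently, rng=random):
    # Each node draws a uniform random number and compares it with the threshold.
    return rng.random() < leach_threshold(P, r, was_cluster_head_recently)

# Toy usage: about 5% of eligible nodes elect themselves in round 3.
print(sum(elects_itself(0.05, 3, False) for _ in range(1000)))
```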
Type-2 Fuzzy Sets and Systems: An Overview [corrected reprint] As originally published in the February 2007 issue of IEEE Computational Intelligence Magazine, the above titled paper (ibid., vol. 2, no. 1, pp. 20-29, Feb 07) contained errors in mathematics that were introduced by the publisher. The corrected version is reprinted in its entirety.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms. This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. Through characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, to provide approximate saddle points of the Lagrangian function, based on the distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size. Simulation examples are provided to demonstrate the effectiveness of the proposed method.
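The consensus building block this method relies on can be illustrated with a plain distributed-averaging step. The weight matrix below is an illustrative doubly stochastic choice for a four-agent ring, not a matrix from the paper.

```python
import numpy as np

def consensus_step(x, W):
    """One round of distributed averaging: each agent replaces its estimate
    with a weighted mix of its neighbours' estimates, x_i <- sum_j W_ij * x_j.
    W must be doubly stochastic for the estimates to converge to the average."""
    return W @ x

# Four agents on a ring with self-loops (illustrative weights).
W = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])
x = np.array([1.0, 5.0, 3.0, 7.0])   # local values; their average is 4.0
for _ in range(50):
    x = consensus_step(x, W)
print(x)                              # all entries approach 4.0
```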
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
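As context for the successive-approximation conversion this brief builds on, a behavioral sketch of the generic SAR (binary-search) loop is shown below. The voltages, resolution, and ideal comparator are illustrative assumptions; the event-triggered correction cycle itself is not modeled.

```python
def sar_convert(vin, vref, n_bits=10):
    """Successive-approximation conversion as a binary search (a sketch).

    Each cycle trials one bit: the DAC level for the trial code is compared
    with the held input, and the bit is kept only if the DAC level does not
    exceed the input.
    """
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Ideal comparator against an ideal DAC: keep the bit if still below vin.
        if trial * vref / (1 << n_bits) <= vin:
            code = trial
    return code

# Digital code for a 0.40 V input with a 0.65 V full scale (illustrative values).
print(sar_convert(0.40, vref=0.65))
```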
1.2
0.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
Advances and Opportunities in Machine Learning for Process Data Analytics In this paper we introduce the current thrust of development in machine learning and artificial intelligence, fueled by advances in statistical learning theory over the last 20 years and commercial successes by leading big data companies. Then we discuss the characteristics of process manufacturing systems and briefly review the data analytics research and development in the last three decades. We give three attributes for process data analytics to make machine learning techniques applicable in the process industries. Next we provide a perspective on the currently active topics in machine learning that could be opportunities for process data analytics research and development. Finally we address the importance of a data analytics culture. Issues discussed range from technology development to workforce education and from government initiatives to curriculum enhancement.
Planning as heuristic search In the AIPS98 Planning Contest, the hsp planner showed that heuristic search planners can be competitive with state-of-the-art Graphplan and sat planners. Heuristic search planners like hsp transform planning problems into problems of heuristic search by automatically extracting heuristics from Strips encodings. They differ from specialized problem solvers such as those developed for the 24-Puzzle and Rubik's Cube in that they use a general declarative language for stating problems and a general mechanism for extracting heuristics from these representations. In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are independent. The heuristic is then used in the context of best-first and hill-climbing search algorithms, and is tested over a large collection of domains. We then consider variations and extensions such as reversing the direction of the search for speeding node evaluation, and extracting information about propositional invariants for avoiding dead-ends. We analyze the resulting planners, evaluate their performance, and explain when they do best. We also compare the performance of these planners with two state-of-the-art planners, and show that the simplest planner based on a pure best-first search yields the most solid performance over a large set of problems. We also discuss the strengths and limitations of this approach, establish a correspondence between heuristic search planning and Graphplan, and briefly survey recent ideas that can reduce the current gap in performance between general heuristic search planners and specialized solvers.
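A compact sketch of the kind of heuristic described here, the additive cost estimate obtained by assuming that preconditions are achieved independently, follows. The action encoding is a simplified STRIPS-like tuple format chosen for illustration; it is not the planner's actual data structures.

```python
import math

def h_add(state, goals, actions):
    """Additive heuristic sketch: estimate the cost of each atom assuming
    action preconditions are achieved independently (HSP-style).

    `actions` is a list of (preconditions, add_effects, cost) tuples whose
    atoms are hashable objects; names are illustrative.
    """
    cost = {atom: 0.0 for atom in state}
    changed = True
    while changed:                          # fixed point over the relaxed actions
        changed = False
        for pre, adds, c in actions:
            if any(p not in cost for p in pre):
                continue                    # not yet (relaxed-)applicable
            new_cost = c + sum(cost[p] for p in pre)
            for atom in adds:
                if new_cost < cost.get(atom, math.inf):
                    cost[atom] = new_cost
                    changed = True
    # Goal cost is the sum of its atoms' costs (hence "additive").
    return sum(cost.get(g, math.inf) for g in goals)

# Toy domain: two steps are needed to reach the goal from {'a'}.
actions = [({"a"}, {"b"}, 1.0), ({"b"}, {"g"}, 1.0)]
print(h_add({"a"}, {"g"}, actions))   # -> 2.0
```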
Solving the find-path problem by good representation of free space Free space is represented as a union of (possibly overlapping) generalized cones. An algorithm is presented which efficiently finds good collision-free paths for convex polygonal bodies through space littered with obstacle polygons. The paths are good in the sense that the distance of closest approach to an obstacle over the path is usually far from minimal over the class of topologically equivalent collision-free paths. The algorithm is based on characterizing the volume swept by a body as it is translated and rotated as a generalized cone, and determining under what conditions one generalized cone is a subset of another.
Optimal Path Planning Generation for Mobile Robots using Parallel Evolutionary Artificial Potential Field In this paper, we introduce the concept of Parallel Evolutionary Artificial Potential Field (PEAPF) as a new method for path planning in mobile robot navigation. The main contribution of this proposal is that it makes controllability possible in complex real-world sceneries with dynamic obstacles if a reachable configuration set exists. The PEAPF outperforms the Evolutionary Artificial Potential Field (EAPF) proposal, which can also obtain optimal solutions but whose processing times might be prohibitive in complex real-world situations. Contrary to the original Artificial Potential Field (APF) method, which cannot guarantee controllability in dynamic environments, this proposal integrates the original APF, evolutionary computation and parallel computation, taking advantage of novel processor architectures, to obtain a flexible path planning navigation method that retains the advantages of the APF and the EAPF while strongly reducing their disadvantages. We show comparative experiments of the PEAPF against the original APF and EAPF methods. The results demonstrate that this proposal outperforms both original methods, making the PEAPF suitable for real-time applications.
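A minimal sketch of the underlying artificial potential field (the classical attractive/repulsive formulation) is shown below. The gains, repulsion radius, and gradient-following loop are illustrative, and the evolutionary and parallel layers of PEAPF are not included.

```python
import numpy as np

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Classical attractive plus repulsive force of an artificial potential field.

    q, goal and each obstacle are 2-D points; d0 is the repulsion radius.
    All parameter values are illustrative, not taken from the paper.
    """
    force = k_att * (goal - q)                         # attractive term toward the goal
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                               # repulsion acts only near obstacles
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return force

def follow_gradient(q, goal, obstacles, step=0.05, n_steps=2000, tol=0.1):
    """Greedy descent on the potential; it can get stuck in local minima,
    which is precisely what the evolutionary layer of EAPF/PEAPF is meant to escape."""
    q = np.asarray(q, dtype=float)
    goal = np.asarray(goal, dtype=float)
    obstacles = [np.asarray(o, dtype=float) for o in obstacles]
    path = [q.copy()]
    for _ in range(n_steps):
        if np.linalg.norm(goal - q) < tol:
            break
        f = apf_force(q, goal, obstacles)
        q = q + step * f / (np.linalg.norm(f) + 1e-9)  # unit-step move along the force
        path.append(q.copy())
    return np.array(path)

path = follow_gradient([0, 0], [10, 10], obstacles=[[5, 5.5]])
print(path[-1])   # end point near the goal if no local minimum was hit
```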
Deep Reinforcement Learning For Safe Local Planning Of A Ground Vehicle In Unknown Rough Terrain Safe unmanned ground vehicle navigation in unknown rough terrain is crucial for various tasks such as exploration, search and rescue and agriculture. Offline global planning is often not possible when operating in harsh, unknown environments, and therefore, online local planning must be used. Most online rough terrain local planners require heavy computational resources, used for optimal trajectory searching and estimating vehicle orientation in positions within the range of the sensors. In this work, we present a deep reinforcement learning approach for local planning in unknown rough terrain with zero-range to local-range sensing, achieving superior results compared to potential fields or local motion planning search spaces methods. Our approach includes reward shaping which provides a dense reward signal. We incorporate self-attention modules into our deep reinforcement learning architecture in order to increase the explainability of the learnt policy. The attention modules provide insight regarding the relative importance of sensed inputs during training and planning. We extend and validate our approach in a dynamic simulation, demonstrating successful safe local planning in environments with a continuous terrain and a variety of discrete obstacles. By adding the geometric transformation between two successive timesteps and the corresponding action as inputs, our architecture is able to navigate on surfaces with different levels of friction.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Cognitive radio: brain-empowered wireless communications Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: · highly reliable communication whenever and wherever needed; · efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
The geometry of innocent flesh on the bone: return-into-libc without function calls (on the x86) We present new techniques that allow a return-into-libc attack to be mounted on x86 executables that calls no functions at all. Our attack combines a large number of short instruction sequences to build gadgets that allow arbitrary computation. We show how to discover such instruction sequences by means of static analysis. We make use, in an essential way, of the properties of the x86 instruction set.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing at the different underlying definitions of the concept of ''simulation,'' noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
H∞ control for sampled-data nonlinear systems described by Takagi–Sugeno fuzzy systems In this paper we consider the design problem of output feedback H∞ controllers for sampled-data fuzzy systems. We first transfer them into equivalent jump fuzzy systems. We establish the so-called Bounded Real Lemma for jump fuzzy systems and give a design method of γ-suboptimal output feedback H∞ controllers in terms of two Riccati inequalities with jumps. We then apply the main results to the sampled-data fuzzy systems and obtain a design method of γ-suboptimal output feedback H∞ controllers. We give a numerical example and construct a γ-suboptimal output feedback H∞ controller.
The accelerator store: A shared memory framework for accelerator-based systems This paper presents the many-accelerator architecture, a design approach combining the scalability of homogeneous multi-core architectures and system-on-chip's high performance and power-efficient hardware accelerators. In preparation for systems containing tens or hundreds of accelerators, we characterize a diverse pool of accelerators and find each contains significant amounts of SRAM memory (up to 90% of their area). We take advantage of this discovery and introduce the accelerator store, a scalable architectural component to minimize accelerator area by sharing its memories between accelerators. We evaluate the accelerator store for two applications and find significant system area reductions (30%) in exchange for small overheads (2% performance, 0%--8% energy). The paper also identifies new research directions enabled by the accelerator store and the many-accelerator architecture.
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/S DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
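A behavioral sketch of the level-crossing sampling principle compared in this analysis is shown below. The delta value and the toy ECG-like waveform are illustrative, and the model ignores comparator delay and timer quantization.

```python
import numpy as np

def level_crossing_sample(t, x, delta):
    """Event-driven (level-crossing) sampling sketch.

    A new sample is emitted whenever the input has moved by at least `delta`
    (one quantization level) from the last emitted value, so the output rate
    follows signal activity rather than a fixed clock.
    """
    events = [(t[0], x[0])]
    last = x[0]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - last) >= delta:
            # Snap to the last crossed level so the residual error stays below delta.
            last = last + delta * np.sign(xi - last) * np.floor(abs(xi - last) / delta)
            events.append((ti, last))
    return events

# Toy comparison of data volume against fixed-rate sampling (illustrative signal).
t = np.linspace(0.0, 1.0, 10_000)
ecg_like = 0.1 * np.sin(2 * np.pi * 1.2 * t) + np.exp(-((t - 0.5) / 0.01) ** 2)
events = level_crossing_sample(t, ecg_like, delta=0.05)
print(f"fixed-rate samples: {t.size}, level-crossing events: {len(events)}")
```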
1.24
0.24
0.24
0.24
0.24
0
0
0
0
0
0
0
0
0
A 65 nm 73 kb SRAM-Based Computing-In-Memory Macro With Dynamic-Sparsity Controlling For neural network (NN) applications at the edge of AI, computing-in-memory (CIM) demonstrates promising energy efficiency. However, when the network size grows while fulfilling the accuracy requirements of increasingly complicated application scenarios, significant memory consumption becomes an issue. Model pruning is a typical compression approach for solving this problem, but it does not fully exploit the energy efficiency advantage of conventional CIMs, because of the dynamic distribution of sparse weights and the increased data movement energy consumption of reading sparsity indexes from outside the chip. Therefore, we propose a vector-wise dynamic-sparsity controlling and computing in-memory structure (DS-CIM) that accomplishes both sparsity control and computation of weights in SRAM, to improve the energy efficiency of the vector-wise sparse pruning model. Implemented in a 65 nm CMOS process, the measurement results show that the proposed DS-CIM macro can save up to 50.4% of computational energy consumption, while ensuring the accuracy of vector-wise pruning models. The test chip can also achieve 87.88% accuracy on the CIFAR-10 dataset at 4-bit precision in inputs and weights, and it achieves 530.2 TOPS/W (normalized to 1 bit) energy efficiency.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
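For illustration, a compact dominance-frontier computation is sketched below. It uses the later Cooper–Harvey–Kennedy formulation driven by immediate dominators rather than the paper's own bottom-up algorithm, but it yields the same dominance frontiers (and hence the same φ-placement sites). The CFG encoding is an illustrative assumption.

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from immediate dominators (a sketch).

    `preds` maps each CFG node to its predecessor list; `idom` maps every node
    except the entry to its immediate dominator.  For each join node (two or
    more predecessors), walk up from each predecessor toward idom[join],
    adding the join to the frontier of every node passed on the way.
    """
    df = {n: set() for n in preds}
    for b in preds:
        if len(preds[b]) >= 2:
            for p in preds[b]:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# Tiny diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom  = {"a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # 'a' and 'b' each have frontier {'join'}
```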
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
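The key-to-node mapping described above can be sketched as plain consistent hashing: each key is stored at its successor, the first node clockwise from the key's identifier on the circle. The snippet below shows only that rule; finger tables, joins, and failures are omitted, and the node names and 16-bit identifier space are illustrative.

```python
import bisect
import hashlib

def chord_id(name, m=16):
    """Hash a key or node name onto the 2**m identifier circle (SHA-1 truncated to m bits)."""
    return int(hashlib.sha1(str(name).encode()).hexdigest(), 16) % (1 << m)

class Ring:
    """Consistent-hashing view of the lookup rule: a key lives on its successor node."""
    def __init__(self, nodes, m=16):
        self.m = m
        self.node_ids = sorted(chord_id(n, m) for n in nodes)
        self.by_id = {chord_id(n, m): n for n in nodes}

    def successor(self, key):
        kid = chord_id(key, self.m)
        # First node identifier >= the key identifier, wrapping around the circle.
        i = bisect.bisect_left(self.node_ids, kid)
        return self.by_id[self.node_ids[i % len(self.node_ids)]]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.successor("some-data-item"))   # the node responsible for this key
```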
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
BOGO: Buy Spatial Memory Safety, Get Temporal Memory Safety (Almost) Free A memory safety violation occurs when a program has an out-of-bound (spatial safety) or use-after-free (temporal safety) memory access. Given its importance as a security vulnerability, recent Intel processors support hardware-accelerated bound checks, called Memory Protection Extensions (MPX). Unfortunately, MPX provides no temporal safety. This paper presents BOGO, a lightweight full memory safety enforcement scheme that transparently guarantees temporal safety on top of MPX's spatial safety. Instead of tracking separate metadata for temporal safety, BOGO reuses the bounds metadata maintained by MPX for both spatial and temporal safety. On free, BOGO scans the MPX bound tables to invalidate the bound of dangling pointers; any following use-after-free error can be detected by MPX as an out-of-bound error. Since scanning the entire MPX bound tables could be expensive, BOGO tracks a small set of hot MPX bound table pages to check on free, and relies on the page fault mechanism to detect any potentially missing dangling pointer, ensuring sound temporal safety protection. Our evaluation shows that BOGO provides full memory safety at 60% runtime overhead and at 36% memory overhead for SPEC CPU 2006 benchmarks. We also show that BOGO incurs reasonable 2.7x slowdown for the worst-case malloc-free intensive benchmarks; and moderate 1.34x overhead for real-world applications.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2³² for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
Precise garbage collection for C Magpie is a source-to-source transformation for C programs that enables precise garbage collection, where precise means that integers are not confused with pointers, and the liveness of a pointer is apparent at the source level. Precise GC is primarily useful for long-running programs and programs that interact with untrusted components. In particular, we have successfully deployed precise GC in the C implementation of a language run-time system that was originally designed to use conservative GC. We also report on our experience in transforming parts of the Linux kernel to use precise GC instead of manual memory management.
SVF: interprocedural static value-flow analysis in LLVM. This paper presents SVF, a tool that enables scalable and precise interprocedural Static Value-Flow analysis for C programs by leveraging recent advances in sparse analysis. SVF, which is fully implemented in LLVM, allows value-flow construction and pointer analysis to be performed in an iterative manner, thereby providing increasingly improved precision for both. SVF accepts points-to information generated by any pointer analysis (e.g., Andersen’s analysis) and constructs an interprocedural memory SSA form, in which the def-use chains of both top-level and address-taken variables are captured. Such value-flows can be subsequently exploited to support various forms of program analysis or enable more precise pointer analysis (e.g., flow-sensitive analysis) to be performed sparsely. By dividing a pointer analysis into three loosely coupled components: Graph, Rules and Solver, SVF provides an extensible interface for users to write their own solutions easily. SVF is publicly available at http://unsw-corg.github.io/SVF.
MarkUs: Drop-in use-after-free prevention for low-level languages Use-after-free vulnerabilities have plagued software written in low-level languages, such as C and C++, becoming one of the most frequent classes of exploited software bugs. Attackers identify code paths where data is manually freed by the programmer, but later incorrectly reused, and take advantage by reallocating the data to themselves. They then alter the data behind the program's back, using the erroneous reuse to gain control of the application and, potentially, the system. While a variety of techniques have been developed to deal with these vulnerabilities, they often have unacceptably high performance or memory overheads, especially in the worst case.We have designed MarkUs, a memory allocator that prevents this form of attack at low overhead, sufficient for deployment in real software, even under allocation- and memory-intensive scenarios. We prevent use-after-free attacks by quarantining data freed by the programmer and forbidding its reallocation until we are sure that there are no dangling pointers targeting it. To identify these we traverse live-objects accessible from registers and memory, marking those we encounter, to check whether quarantined data is accessible from any currently allocated location. Unlike garbage collection, which is unsafe in C and C++, MarkUs ensures safety by only freeing data that is both quarantined by the programmer and has no identifiable dangling pointers. The information provided by the programmer's allocations and frees further allows us to optimise the process by freeing physical addresses early for large objects, specialising analysis for small objects, and only performing marking when sufficient data is in quarantine. Using MarkUs, we reduce the overheads of temporal safety in low-level languages to 1.1× on average for SPEC CPU2006, with a maximum slowdown of only 2×, vastly improving upon the state-of-the-art.
SAFECode: enforcing alias analysis for weakly typed languages Static analysis of programs in weakly typed languages such as C and C++ is generally not sound because of possible memory errors due to dangling pointer references, uninitialized pointers, and array bounds overflow. We describe a compilation strategy for standard C programs that guarantees that aggressive interprocedural pointer analysis (or less precise ones), a call graph, and type information for a subset of memory, are never invalidated by any possible memory errors. We formalize our approach as a new type system with the necessary run-time checks in operational semantics and prove the correctness of our approach for a subset of C. Our semantics provide the foundation for other sophisticated static analyses to be applied to C programs with a guarantee of soundness. Our work builds on a previously published transformation called Automatic Pool Allocation to ensure that hard-to-detect memory errors (dangling pointer references and certain array bounds errors) cannot invalidate the call graph, points-to information or type information. The key insight behind our approach is that pool allocation can be used to create a run-time partitioning of memory that matches the compile-time memory partitioning in a points-to graph, and efficient checks can be used to isolate the run-time partitions. Furthermore, we show that the sound analysis information enables static checking techniques that eliminate many run-time checks. Our approach requires no source code changes, allows memory to be managed explicitly, and does not use meta-data on pointers or individual tag bits for memory. Using several benchmarks and system codes, we show experimentally that the run-time overheads are low (less than 10% in nearly all cases and 30% in the worst case we have seen). We also show the effectiveness of static analyses in eliminating run-time checks.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Termination detection for diffusing computations
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing The DySER (Dynamically Specializing Execution Resources) architecture supports both functionality specialization and parallelism specialization. By dynamically specializing frequently executing regions and applying parallelism mechanisms, DySER provides efficient functionality and parallelism specialization. It outperforms an out-of-order CPU, Streaming SIMD Extensions (SSE) acceleration, and GPU acceleration while consuming less energy. The full-system field-programmable gate array (FPGA) prototype of DySER integrated into OpenSparc demonstrates a practical implementation.
Causality, influence, and computation in possibly disconnected synchronous dynamic networks In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
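For readers who want the quantitative picture behind that claim, the textbook small-signal model of the regeneration phase is summarized below. The symbols (g_m, C, ΔV, V_L) are generic and the expressions are the standard first-order result, not figures taken from the paper itself.

```latex
% Standard small-signal model of the regeneration phase (textbook result):
% the cross-coupled pair forms a positive-feedback loop with transconductance
% g_m driving capacitance C, so any initial imbalance grows exponentially.
\[
  C\,\frac{d\,\Delta V}{dt} = g_m\,\Delta V
  \quad\Longrightarrow\quad
  \Delta V(t) = \Delta V(0)\, e^{\,t/\tau}, \qquad \tau = \frac{C}{g_m}
\]
% Time needed to amplify an initial imbalance to a valid logic level V_L:
\[
  t_{\mathrm{dec}} = \tau \,\ln\!\frac{V_L}{\Delta V(0)}
\]
% For a uniformly distributed input, the probability of not resolving within
% a time T (metastability) therefore falls off as e^{-T/\tau}.
```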
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which attenuates large interferers in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
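A small illustration of the spatial delta encoding mentioned above is sketched here. The NumPy model, signal statistics and variable names are invented for the example (the chip performs this on ADC codes in hardware); the point is only that correlated neighbouring channels make channel-to-channel differences much smaller than the raw samples.

```python
import numpy as np

# Toy illustration of spatial delta encoding across adjacent recording channels.
rng = np.random.default_rng(0)
common = rng.normal(size=1000)                        # activity shared by nearby electrodes
samples = np.stack([common + 0.05 * rng.normal(size=1000) for _ in range(32)])

def delta_encode(frame):
    """Keep channel 0 as-is, transmit differences between neighbouring channels."""
    return np.concatenate(([frame[0]], np.diff(frame)))

def delta_decode(enc):
    return np.cumsum(enc)

frame = samples[:, 0]                     # one time-multiplexed frame of 32 channels
enc = delta_encode(frame)
assert np.allclose(delta_decode(enc), frame)
# The differences have a far smaller spread than the raw samples,
# so they can be represented with fewer bits per value.
print(np.std(frame), np.std(enc[1:]))
```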
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.066667, 0, 0, 0, 0, 0, 0, 0, 0
A structured overlay for multi-dimensional range queries We introduce SONAR, a structured overlay to store and retrieve objects addressed by multi-dimensional names (keys). The overlay has the shape of a multi-dimensional torus, where each node is responsible for a contiguous part of the data space. A uniform distribution of keys on the data space is not necessary, because denser areas get assigned more nodes. To nevertheless support logarithmic routing, SONAR maintains, per dimension, fingers to other nodes, that span an exponentially increasing number of nodes. Most other overlays maintain such fingers in the key-space instead and therefore require a uniform data distribution. SONAR, in contrast, avoids hashing and is therefore able to perform range queries of arbitrary shape in a logarithmic number of routing steps--independent of the number of system- and query-dimensions. SONAR needs just one hop for updating an entry in its routing table: A longer finger is calculated by querying the node referred to by the next shorter finger for its shorter finger. This doubles the number of spanned nodes and leads to exponentially spaced fingers.
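The one-hop finger maintenance rule described above (ask the node that your level-(k-1) finger points to for its own level-(k-1) finger, which doubles the span) can be sketched along a single dimension. The data structures below are illustrative and ignore the multi-dimensional torus, key responsibility and churn handling.

```python
# Sketch of SONAR-style one-hop finger doubling along one dimension (illustrative).
class Node:
    def __init__(self, idx):
        self.idx = idx
        self.fingers = []            # fingers[k] points 2^k nodes ahead

N, LEVELS = 16, 4
ring = [Node(i) for i in range(N)]

# Level 0: the direct neighbour along this dimension.
for i, n in enumerate(ring):
    n.fingers = [ring[(i + 1) % N]]

# Level k: query the node reached by the level-(k-1) finger for *its*
# level-(k-1) finger; the span doubles, so each update needs only one hop.
for k in range(1, LEVELS):
    for n in ring:
        n.fingers.append(n.fingers[k - 1].fingers[k - 1])

print([f.idx for f in ring[0].fingers])   # [1, 2, 4, 8]: exponentially spaced in nodes
```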
End-to-end routing behavior in the internet The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exceptions being Chinoy's (1993) analysis of the dynamics of Internet routing information, and work, similar in spirit, by Labovitz, Malan, and Jahanian (see Proc. SIGCOMM'97, 1997). We report on an analysis of 40,000 end-to-end route measurements conducted using repeated “traceroutes” between 37 Internet sites. We analyze the routing behavior for pathological conditions, routing stability, and routing symmetry. For pathologies, we characterize the prevalence of routing loops, erroneous routing, infrastructure failures, and temporary outages. We find that the likelihood of encountering a major routing pathology more than doubled between the end of 1994 and the end of 1995, rising from 1.5% to 3.3%. For routing stability, we define two separate types of stability, “prevalence”, meaning the overall likelihood that a particular route is encountered, and “persistence”, the likelihood that a route remains unchanged over a long period of time. We find that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days. About two-thirds of the Internet paths had routes persisting for either days or weeks. For routing symmetry, we look at the likelihood that a path through the Internet visits at least one different city in the two directions. At the end of 1995, this was the case half the time, and at least one different autonomous system was visited 30% of the time.
Why Do Internet Services Fail, and What Can Be Done About It? We describe the architecture, operational practices, and failure characteristics of three very large-scale Internet services. Our research on architecture and operational practices took the form of interviews with architects and operations staff at those (and several other) services. Our research on component and service failure took the form of examining the operations problem tracking databases from two of the services and a log of service failure post-mortem reports from the third. Architecturally, we find convergence on a common structure: division of nodes into service front-ends and back-ends, multiple levels of redundancy and load-balancing, and use of custom-written software for both production services and administrative tools. Operationally, we find a thin line between service developers and operators, and a need to coordinate problem detection and repair across administrative domains. With respect to failures, we find that operator errors are their primary cause, operator error is the most difficult type of failure to mask, service front-ends are responsible for more problems than service back-ends but fewer minutes of unavailability, and that online testing and more thoroughly exposing and detecting component failures could reduce system failure rates for at least one service.
Merging ring-structured overlay indices: toward network-data transparency. Peer-to-peer index structures distributed and managed over the planet, commonly known as structured overlays (e.g., distributed hash tables) have been touted to play the role of a fundamental building block for internet-scale distributed systems. Traditional designs consider incremental or possibly even parallelized construction of a single overlay, which implicitly assumes global control and coordination to enforce the construction of an unique overlay. However, if merger of originally isolated overlays is made possible, then one can realize decentralized bootstrapping of overlays. So to say, smaller overlays can be constructed using any of the traditional mechanisms, which can then organically coalesce to form a larger overlay. Such self-organizational attributes of decentralized bootstrapping and organic growth are of paramount importance for large scale systems. In our previous works, we explained the challenges of merging important families of (tree and ring) structured overlays (Datta and Aberer in international workshop on self-organizing systems, 2006), and identified that tree structured overlays are relatively easier to merge in a transparent manner (Datta in IEEE international conference on self-adaptive and self-organizing systems, 2007). In this paper we investigate how ring structured overlays can be merged, both in terms of the necessary algorithms, as well as how it performs during the merger process, taking into account not only the ‘network’ merger process, but also looking into how and whether this process is ‘transparent’ for applications/end-users accessing and using the overlay as an index. We introduce interesting new metrics to evaluate the merger process and carry out asymptotic analysis for estimating the same, besides conducting simulation experiments to validate the theory as well as measure other aspects of the overlays’ performance under merger.
P-Grid: a self-organizing structured P2P system Abstract: this paper was supported in part by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322 and by SNSF grant 2100064994, "Peer-to-Peer Information Systems." messages. From the responses it (randomly) selects certain peers to which direct network links are established
Controlling the cost of reliability in peer-to-peer overlays Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse than expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over the overlay maintenance cost are no longer warranted. Simulations using real traces show that they enable high reliability and performance even in very adverse conditions with low maintenance cost.
Designing Less-Structured P2P Systems for the Expected High Churn We address the problem of highly transient populations in unstructured and loosely structured peer-to-peer (P2P) systems. We propose a number of illustrative query-related strategies and organizational protocols that, by taking into consideration the expected session times of peers (their lifespans), yield systems with performance characteristics more resilient to the natural instability of their environments. We first demonstrate the benefits of lifespan-based organizational protocols in terms of end-application performance and in the context of dynamic and heterogeneous Internet environments. We do this using a number of currently adopted and proposed query-related strategies, including methods for query distribution, caching, and replication. We then show, through trace-driven simulation and wide-area experimentation, the performance advantages of lifespan-based, query-related strategies when layered over currently employed and lifespan-based organizational protocols. While merely illustrative, the evaluated strategies and protocols clearly demonstrate the advantages of considering peers' session time in designing widely-deployed P2P systems.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
Incremental Validation of XML Documents We investigate the incremental validation of XML documents with respect to DTDs and XML Schemas, under updates consisting of element tag renamings, insertions and deletions. DTDs are modeled as extended context-free grammars and XML Schemas are abstracted as "specialized DTDs", allowing to decouple element types from element tags. For DTDs, we exhibit an O(m log n) incremental validation algorithm using an auxiliary structure of size O(n), where n is the size of the document and m the number of updates. For specialized DTDs, we provide an O(m log2 n) incremental algorithm, again using an auxiliary structure of size O(n). This is a significant improvement over brute-force re-validation from scratch.
Enabling open-source cognitively-controlled collaboration among software-defined radio nodes Software-defined radios (SDRs) are now recognized as a key building block for future wireless communications. We have spent the past year enhancing existing open software to create a software-defined data radio. This radio extends the notion of software-defined behavior to higher layers in the protocol stack: most importantly through the media access layer. Our particular approach to the problem has been guided by the desire to allow fine-grained cognitive control of the radio. We describe our system, Adaptive Dynamic Radio Open-source Intelligent Team (ADROIT).
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i. e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
A Highly Adaptive Leader Election Algorithm for Mobile Ad Hoc Networks.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.077649, 0.071704, 0.071704, 0.071704, 0.038074, 0.01737, 0.003259, 0.000224, 0, 0, 0, 0, 0, 0
A Flash-Based Non-Uniform Sampling ADC With Hybrid Quantization Enabling Digital Anti-Aliasing Filter. This paper introduces different classes of analog-to-digital converter (ADC) architecture that non-uniformly samples the analog input and shifts from conventional voltage quantization to a hybrid quantization paradigm wherein both voltage and time quantization are utilized. In this architecture, the sampling rate adapts to the input frequency, which maintains an alias-free spectrum and enables an ...
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
Active-RC Filters Using the Gm-Assisted OTA-RC Technique. The linearity of conventional active-RC filters is limited by the operational transconductance amplifiers (OTAs) used in the integrators. Transconductance-capacitance (Gm-C) filters are fast and can be linear; however, they are sensitive to parasitic capacitances. We explore the Gm-assisted OTA-RC technique, which is a way of combining Gm-C and active-RC integrators in a manner that enhances the l...
A 40 nm CMOS Derivative-Free IF Active-RC BPF With Programmable Bandwidth and Center Frequency Achieving Over 30 dBm IIP3 An 8th-order Chebyshev-II ladder active-RC BPF was fabricated in a 40 nm CMOS low-leakage digital process. A new technique to design for zero capacitance spread (ZCS) is reported to enable the application of integrator frequency compensation (IFC). Combined with wideband op-amps employing a current-reusing, split-path feed-forward compensation (FFC) technique, significant power savings are achieved in a standard 40 nm CMOS process. The BPF measures a center frequency (CF) of 85–225 MHz and four programmable bandwidth-to-center frequency (BW/CF) ratios of 5%–40%. Both the CF and BW/CF ratio are digitally programmable. It also measures a maximum in-band IIP3 of 31 dBm at 0 dB gain and a maximum in-band frequency-response deviation of 0.2 dB while consuming 33 mA from a 1.5 V supply.
Compressed Level Crossing Sampling for Ultra-Low Power IoT Devices. Level crossing sampling (LCS) is a power-efficient analog-to-digital conversion scheme for spikelike signals that arise in many Internet of Things-enabled automotive and environmental monitoring applications. However, LCS scheme requires a dedicated time-to-digital converter with large dynamic range specifications. In this paper, we present a compressed LCS that exploits the signal sparsity in the...
How to Feed a Growing Population—An IoT Approach to Crop Health and Growth Humankind consumes the planet's resources beyond what the Earth can provide, with estimates indicating that by 2050, 9.2 billion human beings will consume yearly almost twice the resources the Earth can provide. Crop production needs to increase by 70% to be able to feed all the population. A convenient way to address this issue is to produce efficiently while respecting the planet. Non-invasive technology at a ...
A 61-nW Level-Crossing ADC With Adaptive Sampling for Biomedical Applications. An ultra-low-power level-crossing analog-to-digital converter (LC-ADC) with on-chip adaptive sampling is presented. Different from conventional ADCs based on Nyquist sampling, LC-ADC utilizes sparsity of signals for low power data acquisition. To save power, the proposed adaptive sampling scheme is implemented with only one scaler and one high-precision comparator, which is in sharp contrast to co...
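Because several of the entries above revolve around level-crossing sampling, a minimal simulation of the generic fixed-delta scheme is sketched below. The signal shape, delta and rates are made-up illustrative values, not parameters of any of the cited converters.

```python
import numpy as np

fs = 10_000                              # dense grid standing in for the analog input, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
x = 0.8 * np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)  # spike-like burst

delta = 0.05                             # spacing between quantization levels
ref = x[0]                               # last crossed level
events = []                              # (time, level) pairs emitted only at crossings
for ti, xi in zip(t, x):
    while xi >= ref + delta:             # one or more upward crossings
        ref += delta
        events.append((ti, ref))
    while xi <= ref - delta:             # one or more downward crossings
        ref -= delta
        events.append((ti, ref))

print(f"{len(events)} level-crossing samples vs {len(t)} uniform samples")
```

The sample rate tracks the input activity rather than a fixed clock, which is where the data and power savings claimed by the LC-ADC entries above come from.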
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
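As a concrete and deliberately tiny illustration of the training loop just described, the sketch below runs stochastic gradient descent on a linear scorer with a squared-error objective; the data, learning rate and variable names are made up for the example.

```python
import numpy as np

# Plain stochastic gradient descent on a toy linear classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)      # desired scores (0 or 1)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X))[:128]:          # small random subsets of examples
        score = X[i] @ w + b                         # forward pass
        err = score - y[i]                           # distance from the desired output
        w -= lr * err * X[i]                         # step opposite the gradient
        b -= lr * err

pred = (X @ w + b > 0.5)
print("training accuracy:", (pred == y.astype(bool)).mean())
```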
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
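To make the chain-rule computation concrete before the text moves on, here is a hand-written forward and backward pass for a two-layer ReLU network. The data, layer sizes and the squared-error objective are toy choices made for this example, not anything prescribed by the article.

```python
import numpy as np

# Two-layer network with ReLU units and hand-written backpropagation.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
Y = (np.sin(X.sum(axis=1, keepdims=True)) > 0).astype(float)

W1, b1 = rng.normal(scale=0.5, size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.05
for step in range(500):
    # forward pass
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)            # ReLU: f(z) = max(z, 0)
    out = h @ W2 + b2
    loss = np.mean((out - Y) ** 2)
    # backward pass: gradient w.r.t. each intermediate, then each weight
    d_out = 2 * (out - Y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = d_out @ W2.T
    d_pre = d_h * (h_pre > 0)             # ReLU derivative gates the gradient
    dW1, db1 = X.T @ d_pre, d_pre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean-squared error:", float(loss))
```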
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explains half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. 
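The convolution-plus-pooling stage just described can be written out in a few lines of NumPy. The filter, image and sizes below are invented for illustration, and the loop-based convolution is deliberately naive rather than the optimized primitives real frameworks use.

```python
import numpy as np

# Naive 2-D convolution (cross-correlation, as in most libraries) plus 2x2 max pooling.
def conv2d(image, kernel):
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)  # shared weights
    return out

def maxpool2x2(fm):
    H, W = fm.shape[0] // 2 * 2, fm.shape[1] // 2 * 2
    return fm[:H, :W].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
edge_filter = np.array([[1.0, -1.0], [1.0, -1.0]])       # one entry of a filter bank
feature_map = np.maximum(conv2d(image, edge_filter), 0)   # convolution + ReLU
pooled = maxpool2x2(feature_map)                          # invariance to small shifts
print(image.shape, feature_map.shape, pooled.shape)       # (8, 8) (7, 7) (3, 3)
```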
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. 
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enable generalization to new combinations of the values of learned features beyond those seen during training (for example, 2n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning. 
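One small point in the passage above is worth making concrete: feeding a one-of-N vector into the first layer is the same as selecting one row of that layer's weight matrix, which is why word vectors are usually implemented as a table lookup. The vocabulary size, width and names below are illustrative.

```python
import numpy as np

# One-of-N ("one-hot") inputs and the learned word-vector table.
V, d = 10_000, 64                       # toy vocabulary size and embedding width
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, d))  # first-layer weights = the word vectors

word_index = 1234
one_hot = np.zeros(V)
one_hot[word_index] = 1.0
# Multiplying by the one-hot vector just picks out one row, so in practice
# the "multiplication" is done as a direct O(d) lookup.
assert np.allclose(one_hot @ E, E[word_index])
```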
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. 
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of the The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object. 
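The memory cell and gates described above correspond to the standard LSTM update equations. The sketch below writes one step out gate by gate; the weight initialization, sizes and the single concatenated input are illustrative choices, and a plain RNN step is shown in a comment for contrast.

```python
import numpy as np

# One step of an LSTM cell, gate by gate (standard formulation).
d_in, d_h = 16, 32
rng = np.random.default_rng(0)
W = {g: rng.normal(scale=0.1, size=(d_in + d_h, d_h)) for g in "ifoc"}
b = {g: np.zeros(d_h) for g in "ifoc"}
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])
    # A plain RNN would simply compute: h = tanh(z @ W_h + b_h)
    i = sigmoid(z @ W["i"] + b["i"])          # input gate: accept new signal
    f = sigmoid(z @ W["f"] + b["f"])          # forget gate: learn when to clear memory
    o = sigmoid(z @ W["o"] + b["o"])          # output gate
    g = np.tanh(z @ W["c"] + b["c"])          # candidate update
    c = f * c_prev + i * g                    # the accumulator-like memory cell
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(10, d_in)):       # a 10-element input sequence
    h, c = lstm_step(x_t, h, c)
print(h.shape, c.shape)
```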
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101. The authors would like to thank the Natural Sciences and Engineering Research Council of Canada, the Canadian Institute For Advanced Research (CIFAR), the National Science Foundation and Office of Naval Research for support. Y.L. and Y.B. are CIFAR fellows.
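The recurrent state update and the gated memory cell described in the passage above can be illustrated with a minimal numerical sketch; the layer sizes, random initialisation, and the single forget gate below are simplifications chosen for brevity, not the exact architectures cited in the review.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 8, 16  # illustrative input and hidden sizes

# Plain recurrent update: the hidden 'state vector' summarises the past inputs.
W_xh = rng.normal(scale=0.1, size=(d_h, d_in))
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))

def rnn_step(h_prev, x):
    # h_t depends on h_{t-1}; unrolling this over time gives a deep feedforward
    # network whose layers all share W_xh and W_hh, which is what makes
    # backpropagation through time possible.
    return np.tanh(W_xh @ x + W_hh @ h_prev)

# Gated memory cell in the spirit of an LSTM: the cell state copies itself
# through a self-connection of weight one, accumulates the external signal,
# and a learned gate decides when to clear the stored content.
W_f = rng.normal(scale=0.1, size=(d_h, d_in + d_h))  # forget-gate weights
W_c = rng.normal(scale=0.1, size=(d_h, d_in + d_h))  # input contribution

def gated_cell_step(c_prev, h_prev, x):
    z = np.concatenate([x, h_prev])
    forget = 1.0 / (1.0 + np.exp(-W_f @ z))  # gate value in (0, 1)
    c = forget * c_prev + np.tanh(W_c @ z)   # gated self-connection of weight one
    return c, np.tanh(c)

h_rnn = np.zeros(d_h)
c, h_cell = np.zeros(d_h), np.zeros(d_h)
for t in range(5):  # process a short input sequence one element at a time
    x = rng.normal(size=d_in)
    h_rnn = rnn_step(h_rnn, x)
    c, h_cell = gated_cell_step(c, h_cell, x)
```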
Reliable Multimedia Transmission Over Cognitive Radio Networks Using Fountain Codes With the explosive growth of wireless multimedia applications over the wireless Internet in recent years, the demand for radio spectral resources has increased significantly. In order to meet the quality of service, delay, and large bandwidth requirements, various techniques such as source and channel coding, distributed streaming, multicast etc. have been considered. In this paper, we propose a technique for distributed multimedia transmission over the secondary user network, which makes use of opportunistic spectrum access with the help of cognitive radios. We use digital fountain codes to distribute the multimedia content over unused spectrum and also to compensate for the loss incurred due to primary user interference. Primary user traffic is modelled as a Poisson process. We develop the techniques to select appropriate channels and study the trade-offs between link reliability, spectral efficiency and coding overhead. Simulation results are presented for the secondary spectrum access model.
A process calculus for Mobile Ad Hoc Networks We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation equivalence (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation equivalence, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. We additionally define a symbolic semantics for the ω-calculus extended with the mismatch operator, along with a corresponding notion of symbolic bisimulation equivalence, and establish congruence results for this extension as well. Finally, we illustrate the practical utility of the calculus by developing and analyzing formal models of a leader election protocol for MANETs and the AODV routing protocol.
A 13-b 40-MSamples/s CMOS pipelined folding ADC with background offset trimming Two key concepts of pipelining and background offset trimming are applied to demonstrate a 13-b 40-MSamples/s CMOS analog-to-digital converter (ADC) based on the basic folding and interpolation architecture. Folding amplifier stages made of simple differential pairs are pipelined using distributed interstage track-and-holders. Background offset trimming implemented with a highly oversampling delta-sigma modulator enhances the resolution of the CMOS folders beyond 12 bits. The background offset trimming circuit continuously measures and adjusts the offsets of the folding amplifiers without interfering with the normal operation. The prototype system is further refined using subranging and digital correction, and exhibits a spurious-free dynamic range (SFDR) of 82 dB at 40 MSamples/s. The measured differential nonlinearity (DNL) and integral nonlinearity (INL) are about ±0.5 and ±2.0 LSB, respectively. The chip fabricated in 0.5-μm CMOS occupies 8.7 mm² and consumes 800 mW at 5 V.
An efficient low-cost fixed-point digital down converter with modified filter bank In radar system, as the most important part of IF radar receiver, digital down converter (DDC) extracts the baseband signal needed from modulated IF signal, and down-samples the signal with decimation factor of 20. This paper proposes an efficient low-cost structure of DDC, including NCO, mixer and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. Then an optimized fixed-point programming is designed in order to implement DDC on fixed-point DSP or FPGA. The simulation results show that the proposed DDC achieves an expectant specification in application of IF radar receiver.
A 0.5 V 10-bit 3 MS/s SAR ADC With Adaptive-Reset Switching Scheme and Near-Threshold Voltage-Optimized Design Technique This brief presents a 10-bit ultra-low power energy-efficient successive approximation register (SAR) analog-to-digital converter (ADC). A new adaptive-reset switching scheme is proposed to reduce the switching energy of the capacitive digital-to-analog converter (CDAC). The proposed adaptive-reset switching scheme reduces the average switching energy of the CDAC by 90% compared to the conventional scheme without the common-mode voltage variation. In addition, the near-threshold voltage (NTV)-optimized digital library is adopted to alleviate the performance degradation in the ultra-low supply voltage while simultaneously increasing the energy efficiency. The NTV-optimized design technique is also introduced to the bootstrapped switch design to improve the linearity of the sample-and-hold circuit. The test chip is fabricated in a 65 nm CMOS, and its core area is 0.022 mm². At a supply of 0.5 V and sampling speed of 3 MS/s, the SAR ADC achieves an ENOB of 8.78 bit and consumes 3.09 μW. The resultant Walden figure-of-merit (FoM) is 2.34 fJ/conv.-step.
1.05
0.05
0.05
0.05
0.05
0.05
0.02
0
0
0
0
0
0
0
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Unifying stabilization and termination in message-passing systems The paper dispels the myth that it is impossible for a message-passing program to be both terminating and stabilizing. We consider a rather general notion of termination: a terminating program eventually stops its execution after the environment ceases to provide input. We identify termination-symmetry to be a necessary condition for a problem to admit a solution with such properties. Our results do confirm that a number of well-known problems (e.g., consensus, leader election) do not allow a terminating and stabilizing solution. On the flip side, they show that other problems such as mutual exclusion and reliable-transmission allow such solutions. We present a message-passing solution to the mutual exclusion problem that is both stabilizing and terminating. We also describe an approach of adding termination to a stabilizing program. To illustrate this approach, we add termination to a stabilizing solution for the reliable transmission problem.
Preserving Stabilization While Practically Bounding State Space Stabilization is a key dependability property for dealing with unanticipated transient faults, as it guarantees that even in the presence of such faults, the system will recover to states where it satisfies its specification. One of the desirable attributes of stabilization is the use of bounded space for each variable. In this paper, we present an algorithm that transforms a stabilizing program that uses variables with unbounded domain into a stabilizing program that uses bounded variables and (practically bounded) physical time. While non-stabilizing programs (that do not handle transient faults) can deal with unbounded variables by assigning large enough but bounded space, stabilizing programs - that need to deal with arbitrary transient faults - cannot do the same since a transient fault may corrupt the variable to its maximum value. We show that our transformation algorithm is applicable to several problems including logical clocks, vector clocks, mutual exclusion, leader election, diffusing computations, Paxos based consensus, and so on. Moreover, our approach can also be used to bound counters used in an earlier work by Katz and Perry for adding stabilization to a non-stabilizing program. By combining our algorithm with that earlier work by Katz and Perry, it would be possible to provide stabilization for a rich class of problems, by assigning large enough but bounded space for variables.
A Mobility Based Metric for Clustering in Mobile Ad Hoc Networks This paper presents a novel relative mobility metric for mobile ad hoc networks (MANETs). It is based on the ratio of power levels due to successive receptions at each node from its neighbors. We propose a distributed clustering algorithm, MOBIC, based on the use of this mobility metric for selection of clusterheads, and demonstrate that it leads to more stable cluster formation than the "least clusterhead change" version of the well-known Lowest-ID clustering algorithm [3]. We show a reduction of as much as 33% in the rate of clusterhead changes owing to the use of the proposed technique. In a MANET that uses scalable cluster-based services, network performance metrics such as throughput and delay are tightly coupled with the frequency of cluster reorganization. Therefore, we believe that using MOBIC can result in a more stable configuration, and thus yield better performance.
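A rough behavioural sketch of the relative mobility idea described in the abstract: the per-neighbour metric is the dB ratio of received powers from two successive packets, and a node with little relative movement is a better clusterhead candidate. The aggregation by variance over neighbours and the function names below are assumptions of this sketch, not MOBIC's exact formulation.

```python
import math
from statistics import pvariance

def relative_mobility_db(rx_power_new: float, rx_power_old: float) -> float:
    """Relative mobility of a neighbour, from the ratio of received powers of
    two successive packets (in dB); 0 dB suggests no relative movement."""
    return 10.0 * math.log10(rx_power_new / rx_power_old)

def local_mobility(neighbour_power_pairs):
    """Aggregate a node's mobility from per-neighbour (newest, previous) power
    readings; lower values indicate a more stable clusterhead candidate."""
    samples = [relative_mobility_db(new, old) for new, old in neighbour_power_pairs]
    return pvariance(samples) if len(samples) > 1 else 0.0

# Example: three neighbours, each with (newest, previous) received-power readings.
print(local_mobility([(1.0e-6, 1.1e-6), (2.0e-6, 1.9e-6), (5.0e-7, 5.0e-7)]))
```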
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during the transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
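The sketch below illustrates only the identity-diffusion half of the idea: a "floodmax" over d synchronous rounds, after which each node knows the largest identifier within d hops. The full Max-Min heuristic additionally runs a floodmin phase and tie-breaking rules that are omitted here, and all names are illustrative.

```python
def floodmax_clusters(adjacency: dict[int, set[int]], d: int) -> dict[int, int]:
    """adjacency maps node id -> set of neighbour ids (undirected links).
    Returns node id -> largest id seen within d hops (a simplified
    clusterhead choice; not the full Max-Min rule set)."""
    winner = {v: v for v in adjacency}  # each node starts with its own id
    for _ in range(d):                  # d synchronous rounds of diffusion
        received = {v: [winner[u] for u in adjacency[v]] for v in adjacency}
        winner = {v: max([winner[v]] + received[v]) for v in adjacency}
    return winner

# Small 6-node line network, d = 2: nodes within 2 hops of node 6 adopt 6, etc.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
print(floodmax_clusters(adj, d=2))
```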
Fundamental control algorithms in mobile networks In this work we propose simple and efficient protocols for counting and leader election in mobile networks. For mobile networks with fixed base stations we provide a new and very efficient protocol for counting the number of mobile hosts. The main part of the work concentrates on ad-hoc networks (no fixed subnetwork). We provide a model for these networks and leader election (and a special form of counting) protocols for both named and anonymous mobile hosts. In this work we define two protocol ...
Parsec: A Parallel Simulation Environment for Complex Systems Design and development costs for extremely large systems could be significantly reduced if only there were efficient techniques for evaluating design alternatives and predicting their impact on overall system performance metrics. Due to the systems' analytical intractability, simulation is the most common performance evaluation technique for such systems. However, the long execution times needed for sequential simulation models often hampers evaluation. The slow speeds of sequential model execution have led to growing interest in the use of parallel execution for simulating large-scale systems. Widespread use of parallel simulation, however, has been significantly hindered by a lack of tools for integrating parallel model execution into the overall framework of system simulation. Another drawback to wide-spread use of simulations is the cost of model design and maintenance. The simulation environment the authors developed at UCLA attempts to address some of these issues. It consists of three primary components: a parallel simulation language called Parsec (parallel simulation environment for complex systems), its GUI, called Pave, and the portable runtime system that implements the simulation algorithms.
Block-secure: Blockchain based scheme for secure P2P cloud storage. With the development of Internet technology, the volume of data is increasing tremendously. To tackle with large-scale data, more and more applications choose to enlarge the storage capacity of users’ terminals with the help of cloud platforms. Before storing data to an untrusted cloud server, some measures should be adopted to guarantee the data security. However, the communication overhead will increase dramatically when users transmit files encrypted by a traditional encryption scheme. In this paper, we address the above problems by proposing a blockchain-based security architecture for distributed cloud storage, where users can divide their own files into encrypted data chunks, and upload those data chunks randomly into the P2P network nodes that provide free storage capacity. We customize a genetic algorithm to solve the file block replica placement problem between multiple users and multiple data centers in the distributed cloud storage environment. Numerical results show that the proposed architecture outperforms the traditional cloud storage architectures in terms of file security and network transmission delay. On average, the file loss rate based on the simulation assumptions utilized in this paper is close to 0% on our architecture while it’s nearly 100% and 71.66% on the architecture with single data center and the distributed architecture using genetic algorithm. Besides, with proposed scheme, the transmission delay on the proposed architecture is reduced by 39.28% and 76.47% on average on the user’s number and the number of file block replicas, respectively, in comparison to the architecture with single data center. Meanwhile, the transmission delay of file block replicas is also reduced by 41.36% on average than that on the distributed architecture using genetic algorithm.
A fast and elitist multiobjective genetic algorithm: NSGA-II Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
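The fast non-dominated sorting step mentioned in the abstract can be sketched as follows; minimisation of all objectives is assumed, and the selection operator, crowding distance, and genetic operators of NSGA-II are not shown.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better
    in at least one (all objectives minimised here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objectives):
    """Peel the population into Pareto fronts using O(M N^2) comparisons."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    domination_count = [0] * n             # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)            # non-dominated: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]                     # drop the trailing empty front

# Two-objective example (both minimised).
pts = [(1, 5), (2, 3), (3, 1), (4, 4), (5, 2)]
print(fast_non_dominated_sort(pts))        # [[0, 1, 2], [3, 4]]
```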
MorphoSys: An Integrated Reconfigurable System for Data-Parallel and Computation-Intensive Applications This paper introduces MorphoSys, a reconfigurable computing system developed to investigate the effectiveness of combining reconfigurable hardware with general-purpose processors for word-level, computation-intensive applications. MorphoSys is a coarse-grain, integrated, and reconfigurable system-on-chip, targeted at high-throughput and data-parallel applications. It is comprised of a reconfigurable array of processing cells, a modified RISC processor core, and an efficient memory interface unit. This paper describes the MorphoSys architecture, including the reconfigurable processor array, the control processor, and data and configuration memories. The suitability of MorphoSys for the target application domain is then illustrated with examples such as video compression, data encryption and target recognition. Performance evaluation of these applications indicates improvements of up to an order of magnitude (or more) on MorphoSys, in comparison with other systems.
Thresholded samplers for UWB impulse radar This paper presents two novel methods for sampling the backscatter in an impulse radar system. The authors call the two related methods swept-threshold sampling and stochastic-resonance sampling. The samplers are simple, mostly digital circuits which are not clocked, but instead utilize continuous-time signal processing. Since fine-pitch CMOS is not very good for analog processing, but instead has very fast digital logic, the samplers are well suited for this technology. An implementation in 90 nm CMOS is described, and measurements which confirm a working 23 GHz sampler are shown.
Control of robotic mobility-on-demand systems: A queueing-theoretical perspective In this paper we present queueing-theoretical methods for the modeling, analysis, and control of autonomous mobility-on-demand (MOD) systems wherein robotic, self-driving vehicles transport customers within an urban environment and rebalance themselves to ensure acceptable quality of service throughout the network. We first cast an autonomous MOD system within a closed Jackson network model with passenger loss. It is shown that an optimal rebalancing algorithm minimizing the number of (autonomously) rebalancing vehicles while keeping vehicle availabilities balanced throughout the network can be found by solving a linear program. The theoretical insights are used to design a robust, real-time rebalancing algorithm, which is applied to a case study of New York City and implemented on an eight-vehicle mobile robot testbed. The case study of New York shows that the current taxi demand in Manhattan can be met with about 8,000 robotic vehicles (roughly 70% of the size of the current taxi fleet operating in Manhattan). Finally, we extend our queueing-theoretical setup to include congestion effects, and study the impact of autonomously rebalancing vehicles on overall congestion. Using a simple heuristic algorithm, we show that additional congestion due to autonomous rebalancing can be effectively avoided on a road network. Collectively, this paper provides a rigorous approach to the problem of system-wide coordination of autonomously driving vehicles, and provides one of the first characterizations of the sustainability benefits of robotic transportation networks.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level VOUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure VOUT > VOUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.04054
0.045403
0.035741
0.030359
0.017076
0.008195
0.000336
0.000059
0.000001
0
0
0
0
0
Integrated 0.35 µm BiCMOS DC-DC Boost Converter The simulation and experimental study of a current-mode DC-DC boost converter is presented in this paper. The DC-DC converter is designed in a standard 0.35 µm BiCMOS process. The off-chip LC filter is operated with an inductance of 1 mH and a capacitance of 100 µF. The simulation results show a high-performance DC-DC boost converter. The output voltage from simulation is 5.8 V with a ripple ratio of 1.5%. The result corresponds with the experimental result within 5% error. The sensing current is within 1 mA and tracks the aspect ratio between the sensing and power MOSFETs.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
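A minimal sketch of computing dominance frontiers for SSA phi placement, assuming immediate dominators are already known; this follows the later Cooper–Harvey–Kennedy formulation rather than the original algorithm in the paper, but it produces the same frontiers.

```python
def dominance_frontiers(preds, idom):
    """preds: block -> list of CFG predecessors; idom: block -> immediate dominator
    (the entry block is taken to be its own idom).
    Returns block -> set of dominance-frontier blocks."""
    df = {b: set() for b in preds}
    for b, ps in preds.items():
        if len(ps) < 2:
            continue                     # only join points contribute frontiers here
        for p in ps:
            runner = p
            # Walk up the dominator tree from each predecessor until reaching
            # b's immediate dominator; every block on the way has b in its frontier.
            while runner != idom[b]:
                df[runner].add(b)
                runner = idom[runner]
    return df

# Diamond CFG: entry -> a, entry -> b, a -> join, b -> join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))
# 'a' and 'b' have 'join' in their frontier; 'entry' and 'join' have empty frontiers.
```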
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
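A centralised toy sketch of the key-to-node mapping at the heart of Chord: both node names and keys are hashed onto an identifier circle, and a key belongs to its successor node. The real protocol distributes this lookup with finger tables and resolves it in O(log N) hops, which is not modelled here; the names and the 16-bit ring size are illustrative.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier bits: 2**M positions on the ring

def ring_id(name: str) -> int:
    """Hash a node name or data key onto the identifier circle."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

class Ring:
    """Centralised stand-in for the distributed ring: maps a key to its successor node."""
    def __init__(self, node_names):
        self.nodes = sorted((ring_id(n), n) for n in node_names)

    def successor(self, key: str) -> str:
        ids = [i for i, _ in self.nodes]
        # First node whose identifier is >= the key's identifier, wrapping around.
        pos = bisect_left(ids, ring_id(key)) % len(self.nodes)
        return self.nodes[pos][1]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.successor("some-file.txt"))  # node responsible for this key
# Adding or removing a node only remaps the keys between it and its neighbour.
```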
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
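As an illustration of the alternating direction method of multipliers, here is the standard scaled-form iteration applied to the lasso; the problem sizes, penalty parameter, and fixed iteration count are illustrative choices, not recommendations from the review.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Minimise 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z,
    using the scaled-dual ADMM iteration."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # The x-update is a ridge-like solve; factor once since A and rho are fixed.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)  # z-update: proximal (shrinkage) step
        u = u + x - z                         # dual update: running sum of residuals
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
print(np.round(admm_lasso(A, b, lam=0.5), 2))  # sparse estimate close to x_true
```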
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 12-Bit 75-MS/s Pipelined ADC Using Incomplete Settling The residue amplifiers in high-speed pipelined analog-to-digital converters (ADCs) typically determine the converter's overall speed and power performance. We propose a mixed-signal technique that exploits incomplete settling to achieve low power residue amplification. In the first stage of a 12-bit, 75-MS/s proof-of-concept prototype, the employed open-loop residue amplifier dissipates only 2.9 m...
Recent Advances and Trends in Noise Shaping SAR ADCs This brief presents an overview of the recent advances in noise shaping SAR ADCs. It discusses the fundamentals behind the noise shaping operation and the two main implementation topologies. Error feedback and cascaded integrators feedforward topologies are examined. Active and passive circuit level implementations, with emphasis in deep nanometre CMOS processes, are discussed along with the associated design trade-offs. Trends in the area such as multi-stage and cascaded implementations are also discussed.
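A behavioural sketch of the error-feedback idea discussed in the brief: the previous quantisation error is added to the next input, so the error spectrum is shaped by (1 - z^-1). The ideal rounding quantiser and the parameter values below are simplifications; actual noise-shaping SAR ADCs realise this with switched-capacitor residue circuits.

```python
import numpy as np

def ef_noise_shaping(x, n_bits=4, full_scale=1.0):
    """First-order error-feedback quantiser: feeding the previous quantisation
    error back into the next sample pushes the error power towards high
    frequencies (noise transfer function 1 - z^-1)."""
    lsb = 2.0 * full_scale / (2 ** n_bits)
    e = 0.0
    y = np.empty_like(x)
    for i, xi in enumerate(x):
        u = xi + e                                            # input plus fed-back error
        y[i] = np.clip(np.round(u / lsb) * lsb, -full_scale, full_scale)
        e = u - y[i]                                          # error stored for next sample
    return y

n = 4096
t = np.arange(n)
x = 0.8 * np.sin(2 * np.pi * 0.01 * t)                        # slow in-band test tone
y = ef_noise_shaping(x)
# The shaped error rises with frequency: low-frequency bins carry less error
# power than bins near the Nyquist rate.
spec = np.abs(np.fft.rfft((y - x) * np.hanning(n)))
print(spec[:20].mean() < spec[-20:].mean())                   # expected: True
```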
An Area-Efficient SAR ADC With Mismatch Error Shaping Technique Achieving 102-dB SFDR 90.2-dB SNDR Over 20-kHz Bandwidth To minimize the area of analog-to-digital converters (ADCs) for multichannel applications and break the SNDR limitation caused by DAC-induced nonlinearity, a more area-efficient mismatch error shaping (MES) scheme is proposed in noise shaping (NS) successive-approximation (SAR) ADC. By employing a switched-capacitor (SC) voltage divider, the proposed method makes the flash ADC and data weighted averaging (DWA) digital circuits in the original MES method obsolete. Therefore, the complexity of the architecture is highly reduced, and the area is significantly reduced to 0.037 mm². In addition, the power consumption for the MES implementation is significantly decreased by more than 7.3%. The proposed SAR ADC is fabricated in GF 40-nm CMOS technology. The measurements show that the proposed MES technique improves the SFDR from 64 to 102 dB and decreases the THD from -63.6 to -95.7 dB. The SNR and SNDR in a 20-kHz bandwidth are 91.6 and 90.2 dB, respectively. The power consumption is 383.4 μW with a 1.1-V power supply at a 16-MS/s sampling rate.
Error-Feedback Mismatch Error Shaping for High-Resolution Data Converters Device mismatch is a key concern for high-resolution data converters. This paper presents a comprehensive study of the error-feedback (EF)-based mismatch error shaping (MES) technique. EF MES overcomes the key challenge of the classic dynamic element matching-based MES whose complexity grows exponentially with the number of bits; however, the prior EF MES comes with the limitations of limited shaping capability and reduced dynamic range. This paper demonstrates how to perform more advanced EF MES for various types of data converters. Moreover, this paper also proposes the use of digital prediction to address the dynamic range loss issue.
Noise Analysis for Comparator-Based Circuits Noise analysis for comparator-based circuits is presented. The goal is to gain insight into the different sources of noise in these circuits for design purposes. After the general analysis techniques are established, they are applied to different noise sources in the comparator-based switched-capacitor pipeline analog-to-digital converter (ADC). The results show that the noise from the virtual ground threshold detection comparator dominates the overall ADC noise performance. The noise from the charging current can also be significant, depending on the size of the capacitors used, but the contribution was small in the prototype. The other noise sources have contributions comparable to those in op-amp-based designs, and their effects can be managed through appropriate design. In the prototype, folded flicker noise was found to be a significant contributor to the broadband noise because the flicker noise of the comparator extends beyond the Nyquist rate of the converter.
22.6 A 2.2GS/s 7b 27.4mW time-based folding-flash ADC with resistively averaged voltage-to-time amplifiers High-speed low-resolution ADCs are widely used for various applications, such as 60GHz receivers, serial links, and high-density disk drive systems. Flash architectures have the highest conversion rate without employing time interleaving. Moreover, flash architectures have the lowest latency, which is often required in feedback-loop systems. However, the area and power consumption are exponentially increased by increasing the resolution since the number of comparators must be 2^N. A folding architecture is a well-known technique to reduce the number of comparators in an ADC while maintaining high sampling rate and low latency [1,2]. Folding architectures were previously realized by generating a number of zero crossings with folding amplifiers. However, the conventional folding amplifiers consume a large amount of power to realize a fast response. In contrast, a folding ADC with only dynamic power consumption and without using amplifiers is reported in [3]. However, only a folding factor of 2 is realized, and therefore the number of comparators is reduced by half.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Chains of recurrences—a method to expedite the evaluation of closed-form functions Chains of Recurrences (CR's) are introduced as an effective method to evaluate functions at regular intervals. Algebraic properties of CR's are examined and an algorithm that constructs a CR for a given function is explained. Finally, an implementation of the method in MAXIMA/Common Lisp is discussed.
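For a polynomial evaluated on a regular grid, a chain of recurrences reduces each new point to a handful of additions. A minimal sketch follows; the function names and the forward-difference construction are one simple way to build such a chain, not the paper's exact algorithm.

```python
def polynomial_cr(coeffs, x0, h):
    """Build the chain {c0, +, c1, +, ..., +, cd} for a polynomial (coeffs in
    increasing power order) on the grid x0, x0+h, x0+2h, ... by taking
    repeated forward differences of the first d+1 sample values."""
    d = len(coeffs) - 1
    poly = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
    values = [poly(x0 + i * h) for i in range(d + 1)]
    chain = []
    while values:
        chain.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]  # forward difference
    return chain

def cr_evaluate(chain, n):
    """Evaluate the chain at n consecutive grid points using only additions."""
    state = list(chain)
    out = []
    for _ in range(n):
        out.append(state[0])
        for k in range(len(state) - 1):   # c_k += c_{k+1}, lowest order first
            state[k] += state[k + 1]
    return out

# f(x) = 2 + 3x + x^2 on x = 0, 0.5, 1.0, ...
print(cr_evaluate(polynomial_cr([2, 3, 1], x0=0.0, h=0.5), n=5))
# [2.0, 3.75, 6.0, 8.75, 12.0], matching direct evaluation of f.
```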
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Side-Channel Leaks in Web Applications: A Reality Today, a Challenge Tomorrow With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
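The decomposition described above can be sketched numerically: a complex baseband signal is split into two constant-envelope components whose phases are offset by ±arccos(A(t)/A_max), so their linear sum restores the original amplitude and phase. The test signal and the value of A_max below are illustrative.

```python
import numpy as np

def linc_split(baseband: np.ndarray, a_max: float):
    """Split a complex baseband signal into two constant-envelope components
    whose sum reproduces the original signal (out-phasing decomposition)."""
    amplitude = np.abs(baseband)
    phase = np.angle(baseband)
    theta = np.arccos(np.clip(amplitude / a_max, 0.0, 1.0))  # out-phasing angle
    s1 = 0.5 * a_max * np.exp(1j * (phase + theta))
    s2 = 0.5 * a_max * np.exp(1j * (phase - theta))
    return s1, s2

rng = np.random.default_rng(2)
s = rng.normal(size=256) + 1j * rng.normal(size=256)
s *= 0.4 / np.max(np.abs(s))                       # keep |s| below a_max = 0.5
s1, s2 = linc_split(s, a_max=0.5)
print(np.allclose(s1 + s2, s))                     # True: linear recombination
print(np.allclose(np.abs(s1), 0.25), np.allclose(np.abs(s2), 0.25))  # constant envelopes
```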
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterrupted and switching mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors is in the mT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifact can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
1.1
0.1
0.1
0.05
0.033333
0.005882
0
0
0
0
0
0
0
0
A Direct Digitizing Chopped Neural Recorder Using a Body-Induced Offset Based DC Servo Loop This article presents a direct digitizing neural recorder that uses a body-induced offset based DC servo loop to cancel electrode offset (EDO) on-chip. The bulk of the input pair is used to create an offset, counteracting the EDO. The architecture does not require AC coupling capacitors which enables the use of chopping without impedance boosting while maintaining a large input impedance of 238 MΩ over the whole 10 kHz bandwidth. Implemented in a 180 nm HV-CMOS process, the prototype occupies a silicon area of only 0.02 mm² while consuming 12.8 μW and achieving 1.82 μVrms of input-referred noise in the local field potential (LFP) band and a NEF of 5.75.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Complex bandpass sampling and direct downconversion of multiband analytic signals The key concept of the software-defined radio architecture is to eliminate the analogue mixers and to place the analogue-to-digital converters as near the antenna as possible. Bandpass sampling can be used for direct downconversion without analogue mixers. In this paper, we report an efficient method to find the ranges of valid bandpass sampling frequency for directly downconverting multiple bandpass analytic signals (single-sideband RF signals). The algorithm results in the ranges of valid bandpass sampling frequency for the complex signals in terms of the bandwidths and band positions of the single-sideband RF signals. Compared to real bandpass sampling, the valid sampling frequency ranges are easier to find, and the ranges thus obtained have much wider intervals than those of real sampling. As a consequence, the complex sampling scheme is more flexible in choosing the sampling frequency and more robust to sampling frequency variation. Furthermore, if spectral inversion is not permitted, then in some cases there is no applicable sampling frequency below the Nyquist rate for real sampling.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Reconfigurable Processor LSI Based on ALU Array with Limitations of Connections of ALUs for Software Radio We have developed an original reconfigurable processor LSI for software radio systems in consumer products. The die size of the LSI is 5.65 mm by 5.65 mm. The processor is based on an ALU array architecture with deliberate limitations on the connections between ALUs. These limitations keep the circuit size small, while multithreading provides high processing performance. We have also developed a prototype broadcasting receiver using the LSIs. The prototype achieves real-time reception of three kinds of broadcasts by changing software, using a maximum of two LSIs. This development advances the realization of consumer products based on software radio.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique being applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Test-chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits, and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Efficient PageRank and SpMV Computation on AMD GPUs Google's famous PageRank algorithm is widely used to determine the importance of web pages in search engines. Given the large number of web pages on the World Wide Web, efficient computation of PageRank becomes a challenging problem. We accelerated the power method for computing PageRank on AMD GPUs. The core component of the power method is the Sparse Matrix-Vector Multiplication (SpMV). Its performance is largely determined by the characteristics of the sparse matrix, such as sparseness and distribution of non-zero values. Based on careful analysis on the web linkage matrices, we design a fast and scalable SpMV routine with three passes, using a modified Compressed Sparse Row format. Our PageRank computation achieves 15x speedup on a Radeon 5870 Graphic Card compared with a PhenomII 965 CPU at 3.4GHz. Our method can easily adapt to large scale data sets. We also compare the performance of the same method on the OpenCL platform with our low-level implementation.
k2-Trees for Compact Web Graph Representation This paper presents a Web graph representation based on a compact tree structure that takes advantage of large empty areas of the adjacency matrix of the graph. Our results show that our method is competitive with the best alternatives in the literature, offering a very good compression ratio (3.3---5.3 bits per link) while permitting fast navigation on the graph to obtain direct as well as reverse neighbors (2---15 microseconds per neighbor delivered). Moreover, it allows for extended functionality not usually considered in compressed graph representations.
Accelerating sparse matrix-vector multiplication on GPUs using bit-representation-optimized schemes The sparse matrix-vector (SpMV) multiplication routine is an important building block used in many iterative algorithms for solving scientific and engineering problems. One of the main challenges of SpMV is its memory-boundedness. Although compression has been proposed previously to improve SpMV performance on CPUs, its use has not been demonstrated on the GPU because of the serial nature of many compression and decompression schemes. In this paper, we introduce a family of bit-representation-optimized (BRO) compression schemes for representing sparse matrices on GPUs. The proposed schemes, BRO-ELL, BRO-COO, and BRO-HYB, perform compression on index data and help to speed up SpMV on GPUs through reduction of memory traffic. Furthermore, we formulate a BRO-aware matrix reordering scheme as a data clustering problem and use it to increase compression ratios. With the proposed schemes, experiments show that average speedups of 1.5× compared to ELLPACK and HYB can be achieved for SpMV on GPUs.
Compression-aware graph computation. Much recent work has focused on parallel graph processing frameworks, including PowerGraph [9] and Ligra [14]. These frameworks process large graphs in shared memory, requiring a terabyte of memory and incurring high maintenance cost. Reducing graph size to fit in memory is therefore crucial for cutting the cost of large-scale graph computation. Compression has been widely used to reduce graph size. However, it can compromise graph computation efficiency because of the nontrivial decompression overhead incurred before computation. In this paper, we propose a simple and yet efficient coding scheme. It not only yields smaller compressed graphs; it also allows graph computation to be performed directly on the compressed graphs with no or only partial decompression, namely compression-aware computation, leading to faster running time. Our experiments validate that the coding scheme achieves a 2.99X compression ratio, and three compression-aware graph algorithms achieve 7.02X, 2.88X, and 2.34X faster running time than the same algorithms on uncompressed graphs.
Implementing Push-Pull Efficiently in GraphBLAS We factor Beamer's push-pull, also known as direction-optimized breadth-first search (DOBFS), into 3 separable optimizations, and analyze them for generalizability, asymptotic speedup, and contribution to overall speedup. We demonstrate that masking is critical for high performance and can be generalized to all graph algorithms where the sparsity pattern of the output is known a priori. We show that these graph algorithm optimizations, which together constitute DOBFS, can be neatly and separably described using linear algebra and can be expressed in the GraphBLAS linear-algebra-based framework. We provide experimental evidence that with these optimizations, a DOBFS expressed in a linear-algebra-based graph framework attains competitive performance with state-of-the-art graph frameworks on the GPU and on a multi-threaded CPU, achieving 101 GTEPS on a Scale 22 RMAT graph.
A fast GPU algorithm for graph connectivity Graphics processing units provide large computational power at a very low price, which positions them as a ubiquitous accelerator. General-purpose programming on graphics processing units (GPGPU) is best suited for regular data-parallel algorithms. GPUs are not directly amenable to algorithms with irregular data access patterns, such as list ranking, finding the connected components of a graph, and the like. In this work, we present a GPU-optimized implementation for finding the connected components of a given graph. Our implementation tries to minimize the impact of irregularity, both at the data level and the functional level. Our implementation achieves a speedup of 9 to 12 times over the best sequential CPU implementation. For instance, our implementation finds the connected components of a graph of 10 million nodes and 60 million edges in about 500 milliseconds on a GPU, given a random edge list. We also draw interesting observations on why PRAM algorithms, such as the Shiloach-Vishkin algorithm, may not be a good fit for the GPU and how they should be modified.
Computation Reuse in DNNs by Exploiting Input Similarity. In recent years, Deep Neural Networks (DNNs) have achieved tremendous success for diverse problems such as classification and decision making. Efficient support for DNNs on CPUs, GPUs and accelerators has become a prolific area of research, resulting in a plethora of techniques for energy-efficient DNN inference. However, previous proposals focus on a single execution of a DNN. Popular applications, such as speech recognition or video classification, require multiple back-to-back executions of a DNN to process a sequence of inputs (e.g., audio frames, images). In this paper, we show that consecutive inputs exhibit a high degree of similarity, causing the inputs/outputs of the different layers to be extremely similar for successive frames of speech or images of a video. Based on this observation, we propose a technique to reuse some results of the previous execution, instead of computing the entire DNN. Computations related to inputs with negligible changes can be avoided with minor impact on accuracy, saving a large percentage of computations and memory accesses. We propose an implementation of our reuse-based inference scheme on top of a state-of-the-art DNN accelerator. Results show that, on average, more than 60% of the inputs of any neural network layer tested exhibit negligible changes with respect to the previous execution. Avoiding the memory accesses and computations for these inputs results in 63% energy savings on average.
FlexRAM: Toward an advanced Intelligent Memory system Major advances in Merged Logic DRAM (MLD) technology coupled with the popularization of memory-intensive applications provide fertile ground for architectures based on Intelligent Memory (IRAM) or Processors-in-Memory (PIM). The contribution of this paper is to explore one way to use the current state-of-the-art MLD technology for general-purpose computers. To satisfy requirements of general purpose and low programming cost, we place the PIM chips in the memory system and let them default to plain DRAM if the application is not enabled for intelligent memory. Since wide usability is crucial, we identify and analyze a range of real applications for PIM. Based on the requirements of these applications and current technological constraints, we design a PIM chip and a PIM-based memory system. We call the chip FlexRAM. We describe FlexRAM's design and floorplan, and the resulting memory system. Evaluation of the system through simulations shows that 4 FlexRAM chips often allow a workstation to run 25-40 times faster.
Long short-term memory. Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
Estimation of entropy and mutual information We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This "inconsistency" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if "bias-corrected" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods.Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
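As a concrete, purely illustrative reference point for the discretized estimators and the 1/N-type Miller-Madow correction mentioned above, a plug-in entropy estimate with that correction can be sketched as follows; the binning choice is an assumption, not a recommendation from the paper.

import numpy as np

def plugin_entropy(samples, bins):
    # Discretize, then apply the plug-in estimator and the Miller-Madow correction.
    counts, _ = np.histogram(samples, bins=bins)
    N = counts.sum()
    p = counts[counts > 0] / N
    h_plugin = -np.sum(p * np.log(p))          # entropy of the binned data, in nats
    m = np.count_nonzero(counts)               # number of occupied bins
    h_mm = h_plugin + (m - 1) / (2.0 * N)      # first-order (1/N) bias correction
    return h_plugin, h_mm

rng = np.random.default_rng(2)
print(plugin_entropy(rng.standard_normal(500), bins=20))

The paper's point is that such corrections can still leave substantial bias in small-sample regimes, which motivates its approximation-theoretic estimator.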
Advancing wireless link signatures for location distinction Location distinction is the ability to determine when a device has changed its position. We explore the opportunity to use sophisticated PHY-layer measurements in wireless networking systems for location distinction. We first compare two existing location distinction methods - one based on channel gains of multi-tonal probes, and another on channel impulse response. Next, we combine the benefits of these two methods to develop a new link measurement that we call the complex temporal signature. We use a 2.4 GHz link measurement data set, obtained from CRAWDAD [10], to evaluate the three location distinction methods. We find that the complex temporal signature method performs significantly better compared to the existing methods. We also perform new measurements to understand and model the temporal behavior of link signatures over time. We integrate our model in our location distinction mechanism and significantly reduce the probability of false alarms due to temporal variations of link signatures.
AMD Fusion APU: Llano The Llano variant of the AMD Fusion accelerated processor unit (APU) deploys AMD Turbo CORE technology to maximize processor performance within the system's thermal design limits. Low-power design and performance/watt ratio optimization were key design approaches, and power gating is implemented pervasively across the APU.
Design Of Q-Enhanced Class-C VCO With Robust Start-Up And High Oscillation Stability A novel topology of Q (quality factor)-enhanced dynamically self-biasing Class-C VCO is proposed in this article. It introduces a bridging capacitor to enhance the quality factor of the oscillator. The enhancement of the quality factor suppresses the squegging phenomenon and the harmonic distortion, and thus improves the phase noise and oscillation stability. The prototype of the proposed circuit was fabricated in SMIC 0.18 μm CMOS process and the measurement results showed a low phase noise of -125 dBc/Hz @ 1 MHz from 3.331 GHz carrier with a total power consumption of 3.36 mW from a 1.2 V supply. The proposed work exhibited an excellent Figure of Merit (FoM) of -190 dBc/Hz.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.072353
0.066667
0.066667
0.066667
0.066667
0.034012
0.001481
0.000342
0.000003
0
0
0
0
0
Evolution on SoC Integration: GSM Baseband-Radio in 0.13 μm CMOS Extended by Fully Integrated Power Management Unit GSM baseband-radio systems-on-chip (SoCs) fabricated in CMOS technology are well established on the market. The next evolutionary step on the proceeding integration path is the extension of the baseband-radio functionality by further integration of power-management-unit (PMU) functionality. The PMU of a mobile phone is normally realized as a separate chip. Integration of the PMU promises the lowest phone cost, eases PCB design, reduces phone development time, and enables overall system optimization. However, several integration challenges have to be tackled. An SoC is presented for which the aforementioned challenges were successfully managed. The achieved performance is demonstrated with corresponding measurement results.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
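The dominance-frontier concept introduced above can be sketched compactly; the version below follows the later Cooper-Harvey-Kennedy formulation rather than the paper's original algorithm, and it assumes immediate dominators have already been computed. The graph encoding is illustrative.

def dominance_frontiers(preds, idom):
    # preds: node -> list of predecessor nodes; idom: node -> immediate dominator.
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                       # only join points generate frontiers
            for p in ps:
                runner = p
                while runner != idom[n]:       # walk up the dominator tree
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# Tiny diamond CFG: entry -> a, b -> merge
preds = {'entry': [], 'a': ['entry'], 'b': ['entry'], 'merge': ['a', 'b']}
idom  = {'entry': 'entry', 'a': 'entry', 'b': 'entry', 'merge': 'entry'}
print(dominance_frontiers(preds, idom))        # 'a' and 'b' have 'merge' in their frontier

Dominance frontiers are exactly the places where SSA phi-functions must be inserted, which is why the paper's efficient construction matters in practice.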
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
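Chord's single operation, mapping a key onto a node, can be sketched with consistent hashing on an identifier ring. The sketch below omits finger tables, joins, and failures, and the ring size and naming scheme are illustrative assumptions rather than the paper's parameters.

import hashlib
from bisect import bisect_left

M = 16                                           # identifier ring has 2**M points

def ident(name):
    # Hash a name onto the identifier ring (Chord uses SHA-1 identifiers).
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, key):
    # Map the key to the first node identifier clockwise from hash(key).
    ring = sorted(node_ids)
    i = bisect_left(ring, ident(key))
    return ring[i % len(ring)]                   # wrap around at the top of the ring

nodes = {ident("node-%d" % i) for i in range(8)}
print(successor(nodes, "some-data-key"))

The full protocol lets each node answer such lookups with only O(log n) routing state, which is where the scalability result comes from.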
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
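As one concrete instance of the ADMM splitting discussed above, a minimal lasso solver can be sketched as follows; the penalty parameter, fixed iteration count, and absence of a stopping criterion are simplifications for illustration, not the review's recommended settings.

import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # Solve min 0.5*||Ax - b||^2 + lam*||x||_1 with the standard x/z/u splitting.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))          # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update: ridge-type solve
        z = soft_threshold(x + u, lam / rho)               # z-update: prox of the l1 norm
        u = u + x - z                                      # scaled dual update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(admm_lasso(A, b, lam=1.0), 2)[:6])          # recovers the sparse support

The same x/z/u pattern extends to the distributed setting by splitting the data across workers and averaging in the z-update.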
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
Spurious Tone Suppression Techniques Applied to a Wide-Bandwidth 2.4 GHz Fractional- N PLL This paper demonstrates that spurious tones in the output of a fractional-N PLL can be reduced by replacing the DeltaSigma modulator with a new type of digital quantizer and adding a charge pump offset combined with a sampled loop filter. It describes the underlying mechanisms of the spurious tones, proposes techniques that mitigate the effects of the mechanisms, and presents a phase noise cancell...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Signature Driven Hierarchical Post-Manufacture Tuning of RF Systems for Performance and Power Integration of RF circuits in deeply scaled CMOS technologies and severe process variation in those technology nodes result in poor manufacturing yield. A post-manufacture tuning approach for yield improvement of RF systems is developed in this paper that uses hierarchical behavioral models of the RF systems. The proposed method first determines module-level performances from the system-level response (signature) to an applied test using top-down model diagnosis. Then, constrained optimization is performed to determine the best module-level tuning parameter values that satisfy system-level specifications (bottom-up analysis) in a power-conscious manner. Both top-down and bottom-up analysis techniques are supported by hierarchical RF behavioral models. The relationship between module-level tuning parameters and module-level performance metrics changes from instance to instance depending on amount of process variation and this factor is included in the proposed tuning procedure. A key benefit of the proposed approach is that only a single-test application is needed at a fixed number of tuning knob settings resulting in reduction in tuning time. The efficiency of the proposed technique is demonstrated using simulation results and hardware experiment.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 2.16-μW Low-Power Continuous-Time Delta–Sigma Modulator With Improved-Linearity Gₘ for Wearable ECG Application This brief presents a 2.16 μW low-power third-order single-bit continuous-time delta-sigma modulator (CTDSM) for electrocardiogram (ECG) signal acquisition application. The proposed CTDSM uses an active-RC (A-RC) integrator as the first integrator and an improved linearized Gm-C integrator for the second and third integrators, respectively. The mixed-integrator structure helps to mitigate the trade-offs between power consumption and resolution. Moreover, a source-degenerated auxiliary differential pair circuit is used for the Gm-C integrators to improve their linearity. By using A-RC and Gm-C integrators together along with an improved-linearized Gm block, the total power consumption was measured as 2.16 μW with a peak signal-to-noise ratio of 80.1 dB, a signal-to-noise and distortion ratio of 78.4 dB, and a dynamic range of 81.4 dB with a bandwidth of 250 Hz. Furthermore, a real-time ECG signal was successfully captured in an ECG acquisition system that consisted of a heart-rate sensor and a signal acquisition circuit, including the proposed CTDSM.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Sparsity-aware estimation of CDMA system parameters The number of active users, their timing offsets, and their (possibly dispersive) channels with the access point are performance-critical parameters for wireless code division multiple access (CDMA). Estimating them as accurately as possible using as short as possible training sequences can markedly improve error performance as well as the capacity of CDMA systems. The fresh look advocated here permeates benefits from recent advances in variable selection and compressive sampling approaches to multiuser communications by casting estimation of these parameters as a sparse linear regression problem. Novel estimators are developed by exploiting two forms of sparsity present: the first emerging from user (in)activity, and the second from the uncertainty on user delays and channel taps. Simulated tests demonstrate a large gain in performance when sparsity-aware estimators of CDMA parameters are compared to sparsity-agnostic standard least-squares-based alternatives.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
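A toy sketch of the single operation Chord exposes, mapping a key onto a node via consistent hashing on a ring. The identifier width, node names, and use of SHA-1 here are illustrative assumptions, and finger tables, node joins, and failure handling are omitted.

```python
# Minimal key-to-node mapping on a hash ring: each key is assigned to its successor node.
import hashlib
from bisect import bisect_left

M = 16                                   # assumed identifier space of 2**M positions
def ring_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

nodes = sorted(ring_id(f"node-{i}") for i in range(8))   # hypothetical node identifiers

def successor(key: str) -> int:
    """Return the ring id of the node responsible for `key`."""
    k = ring_id(key)
    i = bisect_left(nodes, k)
    return nodes[i % len(nodes)]         # wrap around the ring

print(successor("some-data-item"))
```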
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
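As a concrete instance of the method surveyed above, the following sketch applies ADMM to the lasso, one of the problems listed in the abstract. The problem size, penalty lam, step rho, and iteration count are illustrative assumptions, not values from the review.

```python
# ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z.
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)    # u is the scaled dual variable
    AtA = A.T @ A + rho * np.eye(n)                      # formed once, reused every iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))                      # x-update (ridge-type solve)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)    # z-update (soft threshold)
        u = u + x - z                                                      # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100); x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.flatnonzero(np.abs(admm_lasso(A, b)) > 0.5))   # expected to recover the support {0..4}
```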
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage, power-efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). A tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of the internal digital-to-analog converter (DAC) by 1 bit. The internal charge-redistribution DAC employs a unit capacitance of 0.5 fF, and the ADC operates close to the thermal-noise limit. To deal with the problem of capacitor mismatch, a reconfigurable capacitor array and a calibration procedure were developed. The prototype ADC fabricated using a 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR at 1.1 MS/sec from a 0.5 V power supply. The FoM is 6.3 fJ/conversion-step and the chip die area is only 160 μm × 70 μm.
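For context, the following behavioural sketch shows the plain binary-search conversion that underlies any SAR ADC. It is an idealised illustration under assumed resolution and reference values; the tri-level comparator and capacitor-DAC details described above are not modelled.

```python
# Behavioural model of successive approximation: each bit is decided by
# comparing the input against the DAC level of the tentative code.
def sar_convert(vin, vref=1.0, n_bits=10):
    code = 0
    for bit in reversed(range(n_bits)):          # MSB first
        trial = code | (1 << bit)                # tentatively set this bit
        if vin >= trial * vref / (1 << n_bits):  # comparator: input vs. DAC output
            code = trial                         # keep the bit, otherwise it stays cleared
    return code

print(sar_convert(0.3217))                       # -> 329, the floor of 0.3217 * 1024
```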
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Time interleaved C-2C SAR ADC with background timing skew calibration in 65nm CMOS This paper presents a 5GS/s 8-bit 40-way time-interleaved SAR ADC fabricated in 65nm CMOS. Two-level hierarchical interleaving is employed, resulting in 4 sub-ADCs each operating at 1.25GS/s at the topmost level with front-end track and hold samplers. The sub-ADCs use capacitive C-2C DACs to minimize the input capacitance and area. A novel background timing skew calibration method is used which requires no redundant signal paths. After calibration, the ADC achieves an SNDR of 33.3dB at Nyquist and consumes 138.6mW from a 1V supply with a 5GS/s sampling rate, yielding an FOM of 738fJ/conv-step. The individual sub-ADC achieves an SNDR of 37.9dB at Nyquist and consumes 34.2mW, yielding an FOM of 428fJ/conv-step.
Improved design of digital fractional-order differentiators using fractional sample delay. In this paper, a digital fractional-order differentiator (FOD) is designed by using fractional sample delay. To improve the design accuracy of conventional fractional differencing and Tustin design methods at high frequency regions, the integer delay is replaced by fractional sample delay. By using the well-documented finite-impulse-response Lagrange, infinite impulse response allpass, and Farrow ...
Digitally Enhanced Wideband I/Q Downconversion Receiver With 2-Channel Time-Interleaved ADCs The interesting concept of employing in-phase/quadrature (I/Q) downconversion together with time-interleaved analog-to-digital converters allows digitizing very wide instantaneous radio-frequency (RF) bandwidths (BWs) and grants enhanced flexibility in accessing and processing the RF spectrum. Such a structure also inevitably suffers from performance degradation due to the analog components' nonidealities, e.g., frequency response mismatches (FRMs), that ultimately lead to spurious mismatch components that limit the system's dynamic range. Available solutions developed for I/Q mismatches or time-interleaving mismatches alone are not compatible for correcting the FRM spurs in such joint I/Q time-interleaved converter (IQ-TIC) architecture. This brief proposes novel blind FRM identification and correction solutions that are able to suppress all the associated spurs in the considered wideband IQ-TIC architecture. The proposed digital correction solutions are tested and verified using measured hardware data obtained from an experimental platform, exhibiting good FRM spur correction performance with instantaneous BW on the order of 800 MHz. These developments pave the way toward 5G radio communication devices and systems where instantaneous BWs on the order of 1 GHz are envisioned at centimeter- and millimeter-wave frequency bands.
A 10-Bit 600-MS/s Time-Interleaved SAR ADC With Interpolation-Based Timing Skew Calibration. This brief presents a 10-bit 600 MS/s 4-channel time-interleaved (TI) successive approximation register analog-to-digital converter (ADC). A background calibration algorithm using Lagrange polynomial interpolation is introduced to calibrate timing skew. It consists of digital detection and adaptive derivative-based correction, employing low filter taps and resulting in hardware reduction. Two refe...
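A simplified numerical sketch of interpolation-based skew correction in the spirit described above. The test signal, skew value, and use of a 3-point Lagrange polynomial are assumptions for illustration; the paper's background detection loop and filter structure are not reproduced.

```python
# A sample taken `skew` periods late is corrected by fitting a 3-point Lagrange
# polynomial through neighbouring samples and re-evaluating at the ideal instant.
import numpy as np

def lagrange3_correct(x, skew):
    """Correct a uniformly sampled signal x whose samples were taken `skew` periods late."""
    xm1 = np.roll(x, 1)      # x[n-1]
    xp1 = np.roll(x, -1)     # x[n+1]
    d = -skew                # evaluate the polynomial `skew` periods earlier
    # Lagrange basis for nodes t = -1, 0, +1 evaluated at t = d
    return (d * (d - 1) / 2) * xm1 + (1 - d * d) * x + (d * (d + 1) / 2) * xp1

f_in, skew = 0.031, 0.02                     # assumed tone frequency and 2% skew
n = np.arange(512)
ideal = np.sin(2 * np.pi * f_in * n)
skewed = np.sin(2 * np.pi * f_in * (n + skew))
corrected = lagrange3_correct(skewed, skew)
# Compare interior samples only (np.roll wraps the two end points).
print(np.max(np.abs(skewed - ideal)[1:-1]), np.max(np.abs(corrected - ideal)[1:-1]))
```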
A 1.6-GS/s 12.2-mW Seven-/Eight-Way Split Time-Interleaved SAR ADC Achieving 54.2-dB SNDR With Digital Background Timing Mismatch Calibration This article presents a split time-interleaved (TI) successive-approximation register (SAR) analog-to-digital converter (ADC) with digital background timing-skew mismatch calibration. It divides a TI-SAR ADC into two split parts with the same overall sampling rate but different numbers of TI channels. Benefitting from the proposed split TI topology, the timing-skew calibration convergence speed is fast without any extra analog circuits. The input impedance of the overall TI-ADC remains unchanged, which is essential for the preceding driving stage in a high-speed application. We designed a prototype seven-/eight-way split TI-ADC implemented in 28-nm CMOS. After a digital background timing-skew calibration, it reaches a 54.2-dB signal-to-noise-and-distortion ratio (SNDR) and 67.1-dB spurious free dynamic range (SFDR) with a near Nyquist rate input signal and a 2.5-GHz effective resolution bandwidth (ERBW). Furthermore, the power consumption of ADC core (mismatch calibration off-chip) is 12.2-mW running at 1.6 GS/s, leading to a Walden figure-of-merit (FOM) of 18.2 fJ/conv.-step and a Schreier FOM of 162.4 dB, respectively.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked.
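As a reference point for the observer classes surveyed above, here is a minimal sketch of the classical Luenberger observer on an assumed double-integrator plant. The model, gains, and Euler discretisation are illustrative choices, not taken from the survey.

```python
# Luenberger observer: xhat' = A xhat + B u + L (y - C xhat), driven by the
# measured output of a simulated plant.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (assumed plant)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # only position is measured
L = np.array([[8.0], [16.0]])            # observer gain (places estimator poles at -4, -4)

dt, T = 1e-3, 5.0
x = np.array([[1.0], [0.0]])             # true state
xhat = np.zeros((2, 1))                  # estimate starts at the origin

for k in range(int(T / dt)):
    u = np.array([[np.sin(0.5 * k * dt)]])
    y = C @ x                                                  # measurement from the plant
    x    = x    + dt * (A @ x    + B @ u)                      # plant step (explicit Euler)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat)) # observer step

print(np.abs(x - xhat).ravel())           # estimation error has decayed toward zero
```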
Highly sensitive Hall magnetic sensor microsystem in CMOS technology A highly sensitive magnetic sensor microsystem based on a Hall device is presented. This microsystem consists of a Hall device improved by an integrated magnetic concentrator and new circuit architecture for the signal processing. It provides an amplification of the sensor signal with a resolution better than 30 μV and a periodic offset cancellation while the output of the microsystem is av...
3.4 A 36Gb/s PAM4 transmitter using an 8b 18GS/s DAC in 28nm CMOS At data rates beyond 10Gb/s, most wireline links employ NRZ signaling. Serial NRZ links as high as 56Gb/s and 60Gb/s have been reported [1]. Nevertheless, as the rate increases, the constraints imposed by the channel, package, and die become more severe and do not benefit from process scaling in the same fashion that circuit design does. Reflections from impedance discontinuities in the PCB and package caused by vias and connectors introduce significant signal loss and distortions at higher frequencies. Even with an ideal channel, at every package-die interface, there is an intrinsic parasitic capacitance due to the pads and the ESD circuit amounting to at least 150fF, and a 50Ω resistor termination at both the transmit and receive ends resulting in an intrinsic pole at 23GHz or lower. In light of all these limitations, serial NRZ signaling beyond 60Gb/s appears suboptimal in terms of both power and performance. Utilizing various modulation techniques such as PAM4, one can achieve a higher spectral efficiency [2]. To enable such transmission formats, high-speed moderate-resolution data converters are required. This paper describes a 36Gb/s transmitter based on an 18GS/s 8b DAC implemented in 28nm CMOS, compliant to the new IEEE802.3bj standard for 100G Ethernet over backplane and copper cables [3].
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.1
0.066667
0
0
0
0
0
0
0
0
0
$\mathcal{L}_2$ State Estimation with Guaranteed Convergence Speed in the Presence of Sporadic Measurements This paper deals with the problem of estimating the state of a nonlinear time-invariant system in the presence of sporadically available measurements and external perturbations. An observer with a continuous intersample injection term is proposed. Such an intersample injection is provided by a linear dynamical system, whose state is reset to the measured output estimation error at each sampling time. The resulting system is augmented with a timer triggering the arrival of a new measurement and is analyzed in a hybrid system framework. The design of the observer is performed to achieve global exponential stability with a given decay rate to a set wherein the estimation error is equal to zero. Robustness with respect to external perturbations and $\mathcal{L}_2$-external stability from the plant perturbation to a given performance output are considered. Moreover, computationally efficient algorithms based on the solution to linear matrix inequalities are proposed to design the observer. Finally, the effectiveness of the proposed methodology is shown in an example.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
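To make the distance-to-BER relation discussed above concrete, here is a generic line-of-sight link-budget sketch for an OOK visible light link with a Lambertian source and Gaussian noise. All numerical values and the noise model are illustrative assumptions; the paper's market-weighted headlamp pattern and NLOS component are not reproduced.

```python
# Generic LOS VLC sketch: received power from a Lambertian source, then OOK BER = Q(sqrt(SNR)).
import math

def ber_vs_distance(d, pt=1.0, m=1, area=1e-4, resp=0.5, noise_i=5e-9):
    """BER at distance d (m): pt = optical power (W), m = Lambertian order,
    area = detector area (m^2), resp = responsivity (A/W), noise_i = noise current (A)."""
    # Received optical power for an aligned LOS path (emission/incidence angles = 0).
    pr = pt * (m + 1) / (2 * math.pi * d * d) * area
    snr = (resp * pr / noise_i) ** 2
    return 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))   # Q(sqrt(SNR))

for d in (5, 10, 20, 40):
    print(d, ber_vs_distance(d))   # BER climbs rapidly as the link stretches
```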
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Unconditionally stable meshless integration of time-domain Maxwell's curl equations. Grid-based methods coupled with an explicit approach for the evolution in time are traditionally adopted in solving PDEs in computational electromagnetics. The discretization in space with a grid covering the problem domain and a stability step-size restriction must be accepted. Evidence is given that efforts are needed to overcome these heavy constraints. The connectivity laws among the points scattered in the problem domain can be avoided by using meshless methods. Among these, the smoothed particle electromagnetics method gives an interesting answer to the problem, overcoming the limit of the grid generation. In the original formulation an explicit integration scheme is used, so that the spatial and time discretizations are strictly interleaved and mutually conditioned. In this paper a formulation of the alternating direction implicit scheme is proposed within the meshless framework. The developed formulation preserves the leapfrog marching in time of the explicit integration scheme. Studies of the system matrices arising at each temporal step are reported with reference to the meshless discretization. The new method, not constrained by a grid in space and unconditionally stable in time, is validated by numerical simulations.
Electrical analogous in viscoelasticity • Mechanical models of materials' viscoelastic behavior are approached by fractional calculus. • Electrical analogous circuits of fractional hereditary materials are proposed. • Validation is demonstrated by using modal analysis. • Electrical analogues can help in better revealing the real behavior of fractional hereditary materials.
Partial Discharge Detection Using a Spherical Electromagnetic Sensor. The presence of a partial discharge phenomenon in an electrical apparatus is a warning signal that could determine the failure of the insulation system, terminating the service of the apparatus and/or the network. In this paper, an innovative partial discharge (PD) measurement instrument based on an antenna sensor is presented and analyzed. Being non-intrusive is one of the most relevant features of the sensor. The frequency response of the antenna sensor and the features to recognize different PD sources and automatically synchronize them with the supply voltage are described and discussed in details. The results show the performance of the instrument can make a fast and correct diagnosis of the health state of insulation systems.
EMI filter design in motor drives with Common Mode voltage active compensation In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of motor CM-voltage active compensator and input EMI filter allows the drive system to comply with EMC standards and to yield an increased reliability at the same time. Two CM input EMI filters are built and compared. They are, designed, respectively, according to the conventional design procedure and considering the actual impedance mismatching between EMI source and receiver. In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system to standard limits.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Cache operations by MRU change The performance of set associative caches is analyzed. The method used is to group the cache lines into regions according to their positions in the replacement stacks of a cache, and then to observe how the memory access of a CPU is distributed over these regions. Results from the preserved CPU traces show that the memory accesses are heavily concentrated on the most recently used (MRU) region in the cache. The concept of MRU change is introduced; the idea is to use the event that the CPU accesses a non-MRU line to approximate the time the CPU is changing its working set. The concept is shown to be useful in many aspects of cache design and performance evaluation, such as comparison of various replacement algorithms, improvement of prefetch algorithms, and speedup of cache simulation.
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Gradient-Based Learning Applied to Document Recognition Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper rev...
Understanding Availability This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say "availability" and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex time-varying nature of availability in today's peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services.
Data Space Randomization Over the past several years, US-CERT advisories, as well as most critical updates from software vendors, have been due to memory corruption vulnerabilities such as buffer overflows, heap overflows, etc. Several techniques have been developed to defend against the exploitation of these vulnerabilities, with the most promising defenses being based on randomization. Two randomization techniques have been explored so far: address space randomization (ASR) that randomizes the location of objects in virtual memory, and instruction set randomization (ISR) that randomizes the representation of code. We explore a third form of randomization called data space randomization (DSR) that randomizes the representation of data stored in program memory. Unlike ISR, DSR is effective against non-control data attacks as well as code injection attacks. Unlike ASR, it can protect against corruption of non-pointer data as well as pointer-valued data. Moreover, DSR provides a much higher range of randomization (typically 2^32 for 32-bit data) as compared to ASR. Other interesting aspects of DSR include (a) it does not share a weakness common to randomization-based defenses, namely, susceptibility to information leakage attacks, and (b) it is capable of detecting some exploits that are missed by full bounds-checking techniques, e.g., some of the overflows from one field of a structure to the next field. Our implementation results show that with appropriate design choices, DSR can achieve a performance overhead in the range of 5% to 30% for a range of programs.
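A toy sketch of the masking idea behind DSR described above. It is illustrative only: a Python stand-in with an assumed 32-bit word and a per-object mask, whereas the real system instruments C programs at compile time so that every legitimate access applies the mask transparently.

```python
# Each object is stored XOR-ed with its own random mask, so an out-of-bounds write
# through one object garbles, rather than controls, the value seen through another.
import os

class MaskedWord:
    """A 32-bit value kept in 'memory' only in masked form."""
    def __init__(self, value: int):
        self.mask = int.from_bytes(os.urandom(4), "little")    # per-object random mask
        self._stored = (value ^ self.mask) & 0xFFFFFFFF        # what actually sits in memory

    def read(self) -> int:
        return (self._stored ^ self.mask) & 0xFFFFFFFF          # unmask on every access

    def write(self, value: int) -> None:
        self._stored = (value ^ self.mask) & 0xFFFFFFFF

secret = MaskedWord(0xDEADBEEF)
print(hex(secret.read()))        # 0xdeadbeef through the legitimate accessor
print(hex(secret._stored))       # what an attacker overwriting raw memory would observe
```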
Online design bug detection: RTL analysis, flexible mechanisms, and evaluation Higher level of resource integration and the addition of new features in modern multi-processors put a significant pressure on their verification. Although a large amount of resources and time are devoted to the verification phase of modern processors, many design bugs escape the verification process and slip into processors operating in the field. These design bugs often lead to lower quality products, lower customer satisfaction, diminishing brand/company reputation, or even expensive product recalls.
IEEE 802.11 wireless LAN implemented on software defined radio with hybrid programmable architecture This paper describes a prototype software defined radio (SDR) transceiver on a distributed and heterogeneous hybrid programmable architecture; it consists of a central processing unit (CPU), digital signal processors (DSPs), and pre/postprocessors (PPPs), and supports both Personal Handy Phone System (PHS), and IEEE 802.11 wireless local area network (WLAN). It also supports system switching between PHS and WLAN and over-the-air (OTA) software downloading. In this paper, we design an IEEE 802.11 WLAN around the SDR; we show the software architecture of the SDR prototype and describe how it handles the IEEE 802.11 WLAN protocol. The medium access control (MAC) sublayer functions are executed on the CPU, while the physical layer (PHY) functions such as modulation/demodulation are processed by the DSPs; higher speed digital signal processes are run on the PPP implemented on a field-programmable gate array (FPGA). The most difficult problem in implementing the WLAN in this way is meeting the short interframe space (SIFS) requirement of the IEEE 802.11 standard; we elucidate the potential weakness of the current configuration and specify a way of implementing the IEEE 802.11 protocol that avoids this problem. This paper also describes an experimental evaluation of the prototype for WLAN use, the results of which agree well with computer-simulation results.
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.2
0.2
0.2
0.1
0
0
0
0
0
0
0
0
0
0
Edge-Centric Distributed Discovery and Access in the Internet of Things. Massive diffusion of constrained devices communicating through low-power wireless technologies in the future Internet of Things (IoT) will require in many scenarios the deployment of IoT gateways to allow applications to discover and access IoT resources. In this context, Fog/Edge computing will represent an opportunity to deploy IoT gateways in proximity of IoT domains, meeting the requirements o...
Router-Based Brokering for Surrogate Discovery in Edge Computing In-network processing pushes computational capabilities closer to the edge of the network, enabling new kinds of location-aware, real-time applications, while preserving bandwidth in the core network. This is done by offloading computations to more powerful or energy-efficient surrogates that are opportunistically available at the network edge. In mobile and heterogeneous usage contexts, the question arises how a client can discover the most appropriate surrogate in the network for offloading a task. In this paper, we propose a brokering mechanism that matches a client with the best available surrogate, based on specified requirements and capabilities. The broker is implemented on standard home routers, and thus, leverages the ubiquity of such devices in urban environments. To motivate the feasibility of this approach, we conduct a coverage analysis based on collected access point locations in a major city. Furthermore, the brokering functionality introduces only a minimal resource overhead on the routers and can significantly reduce the latency compared to distant, cloud-based solutions.
State Machine Replication for the Masses with BFT-SMART The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguishes it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
An Object Store Service for a Fog/Edge Computing Infrastructure Based on IPFS and a Scale-Out NAS Fog and Edge Computing infrastructures have been proposed to address the latency issue of the current Cloud Computing platforms. While a couple of works illustrated the advantages of these infrastructures in particular for the Internet of Things (IoT) applications, elementary Cloud services that can take advantage of the geo-distribution of resources have not been proposed yet. In this paper, we propose a first-class object store service for Fog/Edge facilities. Our proposal is built with Scale-out Network Attached Storage systems (NAS) and IPFS, a BitTorrent-based object store spread throughout the Fog/Edge infrastructure. Without impacting the IPFS advantages particularly in terms of data mobility, the use of a Scale-out NAS on each site reduces the inter-site exchanges that are costly but mandatory for the metadata management in the original IPFS implementation. Several experiments conducted on Grid'5000 testbed are analyzed and confirmed, first, the benefit of using an object store service spread at the Edge and second, the importance of mitigating inter-site accesses. The paper concludes by giving a few directions to improve the performance and fault tolerance criteria of our Fog/Edge Object Store Service.
Demo: A Proof-of-Concept Implementation of Guard Secure Routing Protocol Skip Graphs belong to the family of Distributed Hash Table (DHT) structures that are utilized as routing overlays in various peer-to-peer applications including blockchains, cloud storage, and social networks. In a Skip Graph overlay, any misbehavior of peers during the routing of a query compromises the system functionality. Guard is the first authenticated search mechanism for Skip Graphs, enables reliable search operation in a fully decentralized manner. In this demo paper, we present a proof-of-concept implementation of Guard on Skip Graph nodes as well as a deployment demo scenario.
Fog Computing: A Taxonomy, Survey and Future Directions. In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named “Fog computing” has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced "cache" memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called "sectors." Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
Barrier certificates for nonlinear model validation Methods for model validation of continuous-time nonlinear systems with uncertain parameters are presented in this paper. The methods employ functions of state-parameter-time, termed barrier certificates, whose existence proves that a model and a feasible parameter set are inconsistent with some time-domain experimental data. A very large class of models can be treated within this framework; this includes differential-algebraic models, models with memoryless/dynamic uncertainties, and hybrid models. Construction of barrier certificates can be performed by convex optimization, utilizing recent results on the sum of squares decomposition of multivariate polynomials.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial-of-service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.10525
0.10525
0.1
0.1
0.1
0.05525
0.00022
0
0
0
0
0
0
0
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
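A minimal first-order sketch can make the VCO-plus-counter idea concrete (this is an illustrative assumption, not the third-order architecture of the abstract): the VCO integrates the input into phase, a counter quantizes the phase, and the first difference of the sampled count gives a digital output whose quantization noise is first-order shaped. All parameter values and names below are assumed.

```python
# Hedged first-order VCO-ADC sketch: phase integration, counting, and first
# difference; the residual quantization error is shaped by the implicit 1 - z^-1.
import numpy as np

fs = 1.0e6            # counter readout rate (Hz), assumed
f0 = 50.0e6           # VCO rest frequency (Hz), assumed
kvco = 20.0e6         # VCO gain (Hz per unit input), assumed
n = 4096

t = np.arange(n) / fs
x = 0.4 * np.sin(2 * np.pi * 1.0e3 * t)          # slow test input

inst_freq = f0 + kvco * x                        # instantaneous VCO frequency
phase = np.cumsum(inst_freq) / fs                # integrated phase in cycles
count = np.floor(phase)                          # counter quantizes whole cycles
digital_out = np.diff(count, prepend=0.0)        # edges counted per sample

# The mean of digital_out tracks (f0 + kvco*x)/fs, while the quantization error
# is pushed to high frequencies by the first difference.
print("mean output (cycles/sample):", digital_out.mean(), "expected ~", f0 / fs)
```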
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations to improve throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently-activated critical paths. The second error-detection design offers a less-intrusive approach for dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits from the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at 1/2 FCLK. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Pulse Frequency Modulation Interpretation of VCOs Enabling VCO-ADC Architectures With Extended Noise Shaping. In this paper, we propose to study voltage controlled oscillators (VCOs) based on the equivalence with pulse frequency modulators (PFMs). This approach is applied to the analysis of VCO-based analog-to-digital converters (VCO-ADCs) and deviates significantly from the conventional interpretation, where VCO-ADCs have been described as the first-order ΔΣ modulators. A first advantage of our approach ...
A 5-GS/s 7.2-ENOB Time-Interleaved VCO-Based ADC Achieving 30.5 fJ/cs This article presents an eight-channel time-interleaved voltage-controlled oscillator (VCO)-based analog-to-digital converter (ADC), achieving 7.2 effective number of bits (ENOBs) at 5 GS/s in 28-nm CMOS. A high-speed ring oscillator with feedforward cross-coupling and a shared tail transistor is combined with an asynchronous counter in order to improve the resolution while minimizing the power co...
A Four-Channel Beamforming Down-Converter in 90-nm CMOS Utilizing Phase-Oversampling In this paper, a 4-GHz, four-channel, analog-beamforming direct-conversion down-converter in 90-nm CMOS is presented. Down-converting vector modulators (VMs) in each channel multiply the inputs with complex beamforming weights before summation between the different channels. The VMs are based on a phase-oversampling technique that allows the synthesis of inherently linear, high-resolution complex gains without complex variable gain amplifiers. A bank of simple passive mixers driven by a multiphase local oscillator (LO) in each VM performs accurate phase shifting with minimal signal distortion, and a pair of transimpedance amplifiers (TIAs) combines the mixer outputs to perform beamforming weighting and combining. Each individual channel achieves 360° phase shift and gain-setting programmability with 8-bit digital control, a complex gain constellation with a mean error-vector magnitude (EVM) of < 2%, and a measured phase error of < 5.5° at a back-off of 4 dB from the maximum gain setting. The beamformer demonstrates > 24-dB blocker rejection for blockers impinging from different directions and 17-dB signal EVM improvement in the presence of an in-channel blocker.
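To make the weight-and-combine step concrete, here is a hedged sketch of generic narrowband complex-weight beamforming for a four-element array (an assumption for illustration; it does not model the phase-oversampling vector-modulator circuit). The half-wavelength spacing and the function names are assumed.

```python
# Hedged illustration of 4-channel complex-weight beamforming: weight each
# channel, sum, and evaluate the array response versus angle of arrival.
import numpy as np

def steering_vector(theta_rad: float, n_elems: int = 4, spacing_wl: float = 0.5):
    """Response of a uniform linear array; element spacing in wavelengths."""
    k = np.arange(n_elems)
    return np.exp(1j * 2 * np.pi * spacing_wl * k * np.sin(theta_rad))

def array_gain_db(weights, theta_rad):
    resp = np.sum(weights * steering_vector(theta_rad, len(weights)))
    return 20 * np.log10(abs(resp) + 1e-12)

# Point the beam at +30 degrees: weights are the conjugated steering vector.
target = np.deg2rad(30.0)
w = steering_vector(target).conj() / 4

for angle_deg in (0.0, 30.0, -45.0):
    print(f"{angle_deg:+6.1f} deg: {array_gain_db(w, np.deg2rad(angle_deg)):6.1f} dB")
```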
Phase averaging and interpolation using resistor strings or resistor rings for multi-phase clock generation Circuit techniques using resistor strings (R-strings) and resistor rings (R-rings) for phase averaging and interpolation are described. Phase averaging can reduce phase errors, and phase interpolation can increase the number of available phases. In addition to the waveform shape, the averaging and the interpolation performances of the R-strings and R-rings are determined by the clock frequency normalized by a RC time constant of the circuits. To attain better phase accuracy, a smaller RC time constant is required, but at the expense of larger power dissipation. To demonstrate the resistor ring's capability of phase averaging and interpolation, a 125-MHz 8-bit digital-to-phase converter (DPC) was designed and fabricated using a standard 0.35-μm SPQM CMOS technology. Measurement results show that the DPC attains 8-bit resolution using the proposed phase averaging and interpolation technique.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network; a simplified sketch of the diffusion idea is given after this abstract. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA [1] and Degree-based [11] solutions.
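The sketch below illustrates d-hop clustering by diffusing node identities; it is a plain d-round "floodmax" (an assumption for illustration) and omits the floodmin phase and tie-breaking rules of the full Max-Min heuristic.

```python
# Simplified d-hop clustering by diffusing node identities: after d synchronous
# rounds of propagating the largest id seen so far, each node adopts the largest
# id within its d-hop neighborhood as its clusterhead.
def floodmax_clusters(adjacency: dict[int, set[int]], d: int) -> dict[int, int]:
    """Return a mapping node -> clusterhead id after d rounds of flooding max ids."""
    winner = {v: v for v in adjacency}            # each node starts with its own id
    for _ in range(d):
        winner = {v: max([winner[v]] + [winner[u] for u in adjacency[v]])
                  for v in adjacency}
    return winner

if __name__ == "__main__":
    # Line topology 0-1-2-3-4-5; with d = 2 each node adopts the max id within 2 hops.
    graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
    print(floodmax_clusters(graph, d=2))          # {0: 2, 1: 3, 2: 4, 3: 5, 4: 5, 5: 5}
```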
A Case for Intelligent RAM Two trends call into question the current practice of microprocessors and DRAMs being fabricated as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip is becoming awkward to use in a system, yet size is growing at 60% per year. Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, and improve energy efficiency as well as to allow more flexible selection of memory size and organization. In addition, IRAM promises savings in power and board area. We review the state of microprocessors and DRAMs today, explore some of the opportunities and challenges for IRAMs, and finally estimate performance and energy efficiency of three IRAM designs.
Communication-efficient leader election and consensus with limited link synchrony We study the degree of synchrony required to implement the leader election failure detector Ω and to solve consensus in partially synchronous systems. We show that in a system with n processes and up to f process crashes, one can implement Ω and solve consensus provided there exists some (unknown) correct process with f outgoing links that are eventually timely. In the special case where f = 1, an important case in practice, this implies that to implement Ω and solve consensus it is sufficient to have just one eventually timely link -- all the other links in the system, Θ(n²) of them, may be asynchronous. There is no need to know which link p → q is eventually timely, when it becomes timely, or what is its bound on message delay. Surprisingly, it is not even required that the source p or destination q of this link be correct: either p or q may actually crash, in which case the link p → q is eventually timely in a trivial way, and it is useless for sending messages. We show that these results are in a sense optimal: even if every process has f - 1 eventually timely links, neither Ω nor consensus can be solved. We also give an algorithm that implements Ω in systems where some correct process has f outgoing links that are eventually timely, such that eventually only f links carry messages, and we show that this is optimal. For f = 1, this algorithm ensures that all the links, except for one, eventually become quiescent.
A 5-Gb/s ADC-Based Feed-Forward CDR in 65 nm CMOS This paper presents an ADC-based CDR that blindly samples the received signal at twice the data rate and uses these samples to directly estimate the locations of zero crossings for the purpose of clock and data recovery. We successfully confirmed the operation of the proposed CDR architecture at 5 Gb/s. The receiver is implemented in 65 nm CMOS, occupies 0.51 mm² and consumes 178.4 mW at 5 Gb/s.
Efficiency of a Regenerative Direct-Drive Electromagnetic Active Suspension. The efficiency and power consumption of a direct-drive electromagnetic active suspension system for automotive applications are investigated. A McPherson suspension system is considered, where the strut consists of a direct-drive brushless tubular permanent-magnet actuator in parallel with a passive spring and damper. This suspension system can both deliver active forces and regenerate power due to imposed movements. A linear quadratic regulator controller is developed for the improvement of comfort and handling (dynamic tire load). The power consumption is simulated as a function of the passive damping in the active suspension system. Finally, measurements are performed on a quarter-car test setup to validate the analysis and simulations.
Software Defined Integrated RF Frontend Receiver Design.
A 12-Bit Dynamic Tracking Algorithm-Based SAR ADC With Real-Time QRS Detection A 12-bit successive approximation register (SAR) ADC based on a dynamic tracking algorithm and a real-time QRS-detection algorithm are proposed. The dynamic tracking algorithm features two tracking windows adjacent to the prediction interval; it tracks the input signal's variation range and automatically adjusts the subrange interval and updates the prediction code (a simplified sketch of this tracking idea follows this abstract). The QRS-complex detection algorithm integrates the synchronous time-sequential ADC with a real-time QRS detector. The chip is fabricated in a standard 0.13 μm CMOS process with a 0.6 V supply. Measurement results show that the proposed ADC exhibits a 10.72 effective number of bits (ENOB) and a 79.63 dB spur-free dynamic range (SFDR) at a 10 kHz sample rate with a 41.5 Hz sinusoid input. The DNL and INL are bounded at -0.6/0.62 LSB and -0.67/1.43 LSB. The ADC achieves a best-case FoM of 48 fJ/conversion-step. The prototype is also tested with an ECG input and successfully extracts the heartbeat signal.
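The sketch below captures only the generic "search a narrow window around the prediction, reopen the full range on a miss" idea; it is an assumption for illustration and simplifies away the paper's two adjacent tracking windows and update rules. All names and parameters are hypothetical.

```python
# Hedged sketch of tracking-window quantization for slowly varying inputs:
# binary (SAR-like) search inside a window around the previous code, with a
# full-range fallback when the input jumps outside the window.
def tracking_sar(samples, n_bits=12, window=64):
    full_scale = 1 << n_bits
    pred = full_scale // 2
    codes, cycles = [], 0
    for x in samples:                            # x given as an ideal digital code
        lo, hi = max(0, pred - window), min(full_scale - 1, pred + window)
        if not (lo <= x <= hi):                  # miss: reopen the full range
            lo, hi = 0, full_scale - 1
        while lo < hi:                           # binary search = comparator cycles
            mid = (lo + hi) // 2
            if x > mid:
                lo = mid + 1
            else:
                hi = mid
            cycles += 1
        pred = lo                                # next prediction = last code
        codes.append(lo)
    return codes, cycles

codes, cycles = tracking_sar([2048, 2050, 2055, 2040, 3000, 3004])
print(codes, "comparator cycles:", cycles)       # fewer than 12 cycles per sample on average
```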
1.24
0.24
0.24
0.24
0.24
0.12
0.048
0
0
0
0
0
0
0
An Object Store Service for a Fog/Edge Computing Infrastructure Based on IPFS and a Scale-Out NAS Fog and Edge Computing infrastructures have been proposed to address the latency issue of the current Cloud Computing platforms. While a couple of works illustrated the advantages of these infrastructures in particular for the Internet of Things (IoT) applications, elementary Cloud services that can take advantage of the geo-distribution of resources have not been proposed yet. In this paper, we propose a first-class object store service for Fog/Edge facilities. Our proposal is built with Scale-out Network Attached Storage systems (NAS) and IPFS, a BitTorrent-based object store spread throughout the Fog/Edge infrastructure. Without impacting the IPFS advantages particularly in terms of data mobility, the use of a Scale-out NAS on each site reduces the inter-site exchanges that are costly but mandatory for the metadata management in the original IPFS implementation. Several experiments conducted on Grid'5000 testbed are analyzed and confirmed, first, the benefit of using an object store service spread at the Edge and second, the importance of mitigating inter-site accesses. The paper concludes by giving a few directions to improve the performance and fault tolerance criteria of our Fog/Edge Object Store Service.
State Machine Replication for the Masses with BFT-SMART The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguishes it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
Demystifying Fog Computing: Characterizing Architectures, Applications and Abstractions Internet of Things (IoT) has accelerated the deployment of millions of sensors at the edge of the network, through Smart City infrastructure and lifestyle devices. Cloud computing platforms are often tasked with handling these large volumes and fast streams of data from the edge. Recently, Fog computing has emerged as a concept for low-latency and resource-rich processing of these observation streams, to complement Edge and Cloud computing. In this paper, we review various dimensions of system architecture, application characteristics and platform abstractions that are manifest in this Edge, Fog and Cloud eco-system. We highlight novel capabilities of the Edge and Fog layers, such as physical and application mobility, privacy sensitivity, and a nascent runtime environment. IoT application case studies based on first-hand experiences across diverse domains drive this categorization. We also highlight the gap between the potential and the reality of Fog computing, and identify challenges that need to be overcome for the solution to be sustainable. Taken together, our article can help platform and application developers bridge the gap that remains in making Fog computing viable.
SA-Chord: A Self-Adaptive P2P Overlay Network Pure Edge Computing relies on peer-to-peer overlay networks to realize the communication backbone between participating entities. In these settings, entities are characterized by high heterogeneity, mobility, and variability, which introduce runtime uncertainty and may harm the dependability of the network. Departing from state-of-the-art solutions, overlay networks for Pure Edge Computing should take into account the dynamics of the operating environment and self-adapt their topology accordingly, in order to increase the dependability of the communication. To this end, this paper discusses the preliminary development and validation of SA-Chord, a self-adaptive version of the well-known Chord protocol, able to adapt the network topology according to a given global goal. SA-Chord has been validated through simulation against two distinct goals: (i) minimize energy consumption and (ii) maximize network throughput. Simulation results are promising and show how SA-Chord efficiently and effectively achieves a given goal.
A proposal of a distributed access control over Fog computing: The ITS use case Internet of Things (IoT) raises many security challenges in relation with the different applications that can be deployed over these environments. IoT access control systems must respond to the new IoT requirements such as scalability, dynamicity, real-time interaction and resources constraint. The goal of this paper is to propose an approach based on Fog and Distributed Hash Table (DHT) toward access control for the Internet of Things. To evaluate the performances of our access solution, we used NS-3 and SUMO. The preliminary obtained results show acceptable overhead for the considered Intelligent Transport System (ITS) scenario.
Fog Computing: Helping the Internet of Things Realize Its Potential. The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
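The dispatch idea behind threaded code can be sketched at a high level: the "program" is a list of routine references plus inline operands, and a tiny inner interpreter walks the list. Real threaded code stores machine addresses; the Python callables and word names below are stand-ins assumed for illustration.

```python
# Hedged sketch of indirect-threaded code: primitive routines are referenced in
# a code list, and the inner loop simply jumps ("threads") from one routine to
# the next. 'lit' consumes its inline operand so dispatch never lands on data.
def lit(vm):                 # push the next cell as a literal
    vm["stack"].append(vm["code"][vm["ip"]]); vm["ip"] += 1

def add(vm):
    b, a = vm["stack"].pop(), vm["stack"].pop(); vm["stack"].append(a + b)

def dup(vm):
    vm["stack"].append(vm["stack"][-1])

def prn(vm):
    print(vm["stack"].pop())

def run(code):
    vm = {"code": code, "ip": 0, "stack": []}
    while vm["ip"] < len(vm["code"]):
        word = vm["code"][vm["ip"]]; vm["ip"] += 1
        word(vm)                                  # thread through the next routine

# (3 + 4) duplicated and printed twice: prints 7 then 7.
run([lit, 3, lit, 4, add, dup, prn, prn])
```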
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to the integrated circuit security. Circuit level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so called converter-reshuffling to insert random spikes in the power consumption profile. An entropy based metric is developed to evaluate the security-performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
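A hedged sketch of push-pull gossip averaging illustrates the aggregation primitive the abstract describes: each node repeatedly averages its value with a random peer, and because every exchange preserves the global sum, all local estimates converge to the network-wide mean. The uniform peer sampling and names below are simplifying assumptions.

```python
# Hedged sketch of gossip-based averaging: pairwise exchanges preserve the sum,
# so every node's local estimate converges to the global mean.
import random

def gossip_average(values, rounds=20, seed=1):
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)            # idealized uniform peer sampling
            if i != j:
                avg = (vals[i] + vals[j]) / 2
                vals[i] = vals[j] = avg
    return vals

data = [10.0, 0.0, 4.0, 6.0, 30.0]
print("true mean:", sum(data) / len(data))
print("gossip estimates:", [round(v, 3) for v in gossip_average(data)])
```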
Linear Amplification with Nonlinear Components A technique for producing bandpass linear amplification with nonlinear components (LINC) is described. The bandpass signal first is separated into two constant envelope component signals. All of the amplitude and phase information of the original bandpass signal is contained in phase modulation on the component signals. These constant envelope signals can be amplified or translated in frequency by amplifiers or mixers which have nonlinear input-output amplitude transfer characteristics. Passive linear combining of the amplified and/or translated component signals produces an amplified and/or translated replica of the original signal.
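The LINC separation can be checked numerically: an amplitude- and phase-modulated signal is split into two constant-envelope components whose sum reproduces the original. The signal parameters below are assumptions chosen only to exercise the identity.

```python
# Hedged numerical check of the LINC decomposition: s = s1 + s2, where both
# components have constant envelope vmax/2 and carry only phase modulation.
import numpy as np

fs, fc, vmax = 1.0e6, 50.0e3, 1.0        # sample rate, carrier, peak envelope (assumed)
t = np.arange(2048) / fs
a = 0.5 + 0.4 * np.cos(2 * np.pi * 1.0e3 * t)        # amplitude modulation, a <= vmax
phi = 0.8 * np.sin(2 * np.pi * 2.0e3 * t)            # phase modulation
s = a * np.cos(2 * np.pi * fc * t + phi)             # original bandpass signal

theta = np.arccos(np.clip(a / vmax, -1.0, 1.0))      # outphasing angle
s1 = (vmax / 2) * np.cos(2 * np.pi * fc * t + phi + theta)
s2 = (vmax / 2) * np.cos(2 * np.pi * fc * t + phi - theta)

# Each component keeps a constant envelope of vmax/2, so it can pass through a
# nonlinear amplifier; linear combining restores the original signal.
print("max reconstruction error:", np.max(np.abs(s - (s1 + s2))))
```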
Opportunistic Information Dissemination in Mobile Ad-hoc Networks: The Profit of Global Synchrony The topic of this paper is the study of Information Dissemination in Mobile Ad-hoc Networks by means of deterministic protocols. We characterize the connectivity resulting from the movement, from failures and from the fact that nodes may join the computation at different times with two values, α and β, so that, within α time slots, some node that has the information must be connected to some node without it for at least β time slots. The protocols studied are classified into three classes: oblivious (the transmission schedule of a node is only a function of its ID), quasi-oblivious (the transmission schedule may also depend on a global time), and adaptive. The main contribution of this work concerns negative results. Contrasting the lower and upper bounds derived, interesting complexity gaps among protocol classes are observed. More precisely, in order to guarantee any progress towards solving the problem, it is shown that α must be at least n - 1 in general, but that α ∈ Ω(n²/log n) if an oblivious protocol is used. Since quasi-oblivious protocols can guarantee progress with α ∈ O(n), this represents a significant gap, almost linear in α, between oblivious and quasi-oblivious protocols. Regarding the time to complete the dissemination, a lower bound of Ω(nα + n³/log n) is proved for oblivious protocols, which is tight up to a polylogarithmic factor because a constructive O(nα + n³ log n) upper bound exists for the same class. It is also proved that adaptive protocols require Ω(nα + n²), which is optimal given that a matching upper bound can be proved for quasi-oblivious protocols. These results show that the gap in time complexity between oblivious and quasi-oblivious, and hence adaptive, protocols is almost linear. This gap is what we call the profit of global synchrony, since it represents the gain the network obtains from global synchrony with respect to not having it.
Towards elastic SDR architectures using dynamic task management. SDR platforms integrating several types and numbers of processing elements in System-on-Chips become an attractive solution for baseband processing in wireless systems. In order to cope with the diversity of protocol applications and the heterogeneity of multi-core architectures, a hierarchical approach for workload distribution is proposed in this paper. Specifically, a system-level scheduler is employed to map applications to multiple processing clusters, complemented with a cluster-level scheduler - the CoreManager - for dynamic resource allocation and configuration as well as for task and data scheduling. A performance analysis of the proposed approach is presented, which shows the advantages of dynamic scheduling against a static approach for variable workloads in the LTE-Advanced uplink multi-user scenarios.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.2
0.2
0.2
0.2
0.2
0.04
0
0
0
0
0
0
0
0
Online Artifact Cancelation in Same-Electrode Neural Stimulation and Recording Using a Combined Hardware and Software Architecture. Advancing studies of neural network dynamics and developments of closed-loop neural interfaces requires the ability to simultaneously stimulate and record the neural cells. Recording adjacent to or at the stimulation site produces artifact signals that are orders of magnitude larger than the neural responses of interest. These signals often saturate the recording amplifier causing distortion or lo...
A Multichannel Integrated Circuit for Electrical Recording of Neural Activity, With Independent Channel Programmability For a few decades, micro-fabricated neural probes have been used, together with microelectronic interfaces, to gain more insight into the activity of neuronal networks. The need for higher temporal and spatial recording resolutions imposes new challenges on the design of integrated neural interfaces with respect to power consumption, data handling and versatility. In this paper, we present an integrated acquisition system for in vitro and in vivo recording of neural activity. The ASIC consists of 16 low-noise, fully-differential input channels with independent programmability of its amplification (from 100 to 6000 V/V) and filtering (1-6000 Hz range) capabilities. Each channel is AC-coupled and implements a fourth-order band-pass filter in order to steeply attenuate out-of-band noise and DC input offsets. The system achieves an input-referred noise density of 37 nV/√Hz, a NEF of 5.1, a CMRR > 60 dB, a THD < 1% and a sampling rate of 30 kS/s per channel, while consuming a maximum of 70 μA per channel from a single 3.3 V supply. The ASIC was implemented in a 0.35 μm CMOS technology and has a total area of 5.6 × 4.5 mm². The recording system was successfully validated in in vitro and in vivo experiments, achieving simultaneous multichannel recordings of cell activity with satisfactory signal-to-noise ratios.
A 2nd order fully-passive noise-shaping SAR ADC with embedded passive gain An opamp-free solution to implement 2nd order noise shaping in a successive approximation register analog-to-digital converter is presented. This 2nd order fully-passive noise shaping, which has high power efficiency, is realized by charge-redistribution. A gain of 2 is required in this proposal which is realized by a passive method to save power. A prototype chip is fabricated in a 65-nm CMOS process occupying a core area of 0.0129 mm². An ENOB of 10.5-bit is achieved at 64-MHz sampling frequency based on an 8-bit CDAC architecture when the OSR is 4. It dissipates 252.9 μW from a 1.0-V supply and achieves a Walden FoM of 10.9 fJ/conv.-step and a Schreier FoM of 169.9 dB.
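As a hedged illustration of where a gain of 2 plausibly enters a second-order error-feedback noise shaper (this is the generic error-feedback relation, not the exact charge-redistribution circuit of the abstract):

```latex
Y(z) = X(z) + \bigl(1 - H_{\mathrm{ef}}(z)\bigr)\,E(z), \qquad
H_{\mathrm{ef}}(z) = 2z^{-1} - z^{-2}
\;\Rightarrow\;
\mathrm{NTF}(z) = 1 - 2z^{-1} + z^{-2} = \bigl(1 - z^{-1}\bigr)^{2},
```

i.e. the quantization error E(z) is high-pass shaped to second order, and the coefficient 2 in the feedback filter is the gain that must be realized (here, passively).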
A 13.5-ENOB, 107-μW Noise-Shaping SAR ADC With PVT-Robust Closed-Loop Dynamic Amplifier This article presents a second-order noise-shaping (NS) successive approximation register (SAR) analog-to-digital converter (ADC) with a process, voltage, and temperature (PVT)-robust closed-loop dynamic amplifier. The proposed closed-loop dynamic amplifier combines the merits of closed-loop architecture and dynamic operation, realizing robustness, high accuracy, and high energy-efficiency simultaneously. It is embedded in the loop filter of an NS SAR design, enabling the first fully dynamic NS-SAR ADC that realizes sharp noise transfer function (NTF) while not requiring any gain calibration. Fabricated in 40-nm CMOS technology, the prototype ADC achieves an SNDR of 83.8 dB over a bandwidth of 625 kHz while consuming only 107 μW. It results in an SNDR-based Schreier figure-of-merit (FoM) of 181.5 dB.
A 12.5-MHz Bandwidth 77-dB SNDR SAR-Assisted Noise Shaping Pipeline ADC This article presents a successive approximation register (SAR)-assisted noise-shaping (NS) pipeline architecture which breaks the speed bottleneck of existing SAR or SAR-assisted NS analog-to-digital converters (ADCs). Rather than being used only for residue amplification and pipeline operation, the multiplying digital-to-analog converter (MDAC) is also reused as a unity buffer and analog adder to realize the NS with an error-feedback (EF) structure in this design. By incorporating the proposed alternative loading capacitor (ALC) technique, an ideal first-order noise transfer function (NTF) is realized without an additional feedback phase and with only a small analog circuit overhead. Unlike other NS SAR ADCs that involve amplification, the inter-stage gain attenuates the noise from the second-stage comparator, thus leading to both high speed and high resolution. Fabricated in a 65-nm CMOS process, the prototype achieves a signal-to-noise-and-distortion ratio (SNDR) of 77.1 dB over a 12.5-MHz bandwidth (BW) with an over-sampling ratio (OSR) of only 8. Under a 1.2-V supply voltage, the ADC consumes 4.5 mW and exhibits a Schreier figure of merit (FoM) of 171.5 dB.
A Bidirectional Neural Interface IC With Chopper Stabilized BioADC Array and Charge Balanced Stimulator. We present a bidirectional neural interface with a 4-channel biopotential analog-to-digital converter (bioADC) and a 4-channel current-mode stimulator in 180 nm CMOS. The bioADC directly transduces microvolt biopotentials into a digital representation without a voltage-amplification stage. Each bioADC channel comprises a continuous-time first-order ΔΣ modulator with a chopper-stabilized OTA input ...
A 0.013 mm², 5 μW, DC-Coupled Neural Signal Acquisition IC With 0.5 V Supply Recent success in brain-machine interfaces has provided hope for patients with spinal-cord injuries, Parkinson's disease, and other debilitating neurological conditions, and has boosted interest in electronic recording of cortical signals. State-of-the-art recording solutions rely heavily on analog techniques at relatively high supply voltages to perform signal conditioning and filtering, leading to large silicon area and limited programmability. We present a neural interface in 65nm CMOS and operating at a 0.5V supply that obtains performance comparable or superior to state-of-the-art systems in a silicon area over 3x smaller. These results are achieved by using a scalable architecture that avoids on-chip passives and takes advantage of high-density logic. The use of 65nm CMOS eases integration with low-power digital systems, while the low supply voltage makes the design more compatible with wireless powering schemes.
A new approach to state observation of nonlinear systems with delayed output The article presents a new approach for the construction of a state observer for nonlinear systems when the output measurements are available for computations after a nonnegligible time delay. The proposed observer consists of a chain of observation algorithms reconstructing the system state at different delayed time instants (chain observer). Conditions are given for ensuring global exponential convergence to zero of the observation error for any given delay in the measurements. The implementation of the observer is simple and computer simulations demonstrate its effectiveness.
Distributed Subgradient Methods For Multi-Agent Optimization We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
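A hedged numerical sketch of the consensus-plus-subgradient iteration the abstract studies: each agent mixes its iterate with its neighbors' using doubly stochastic weights and then steps along its own local subgradient with a diminishing step size. The ring topology, weights, local objectives, and step schedule below are assumptions for illustration.

```python
# Hedged sketch of a distributed subgradient method for f(x) = sum_i f_i(x):
# x_i <- sum_j a_ij x_j - alpha_k * g_i, with g_i a subgradient of f_i at x_i.
import numpy as np

targets = np.array([1.0, 3.0, 8.0, 4.0])   # f_i(x) = |x - t_i|; minimizers lie in [3, 4]
n = len(targets)

# Ring topology with symmetric, doubly stochastic mixing weights (assumed).
A = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = np.zeros(n)                             # one scalar iterate per agent
step = 0.5
for k in range(1, 501):
    g = np.sign(x - targets)                # local subgradients of |x - t_i|
    x = A @ x - (step / np.sqrt(k)) * g     # consensus step + subgradient step

print("agent iterates:", np.round(x, 3))    # all agents end up near a common minimizer
```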
Filtering by Aliasing In this manuscript we describe a fundamentally novel approach to the design of anti-aliasing filters. The approach, termed Filtering by Aliasing, incorporates the frequency-domain aliasing operation itself into the filtering task. The spectral content is spread with a periodic mixer and weighted with a simple analog filter before it aliases at the sampler. By designing the system according to the formulations presented in this manuscript, the sampled output will have been subjected to sharp, highly programmable anti-alias filtering. This manuscript describes the proposed Filtering by Aliasing idea, the effective programmable anti-aliasing filter, its design, and its range of frequency responses. The manuscript also addresses the implementation sensitivities of the proposed Filtering by Aliasing approach and provides a performance comparison against existing techniques in the context of reconfigurable anti-alias filtering.
The Quadrature LC Oscillator: A Complete Portrait Based on Injection Locking We show that the quadrature LC oscillator is best treated as two strongly coupled, nominally identical oscillators that are locked to the same frequency. Differential equations that extend Adler's description of locking to strong injection reveal the full dynamics of this circuit. With a simplifying insight, the analysis reveals all the modes of the oscillator, their stability, the effects of mism...
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
Architectural Support for Dynamic Linking All software in use today relies on libraries, including standard libraries (e.g., C, C++) and application-specific libraries (e.g., libxml, libpng). Most libraries are loaded in memory and dynamically linked when programs are launched, resolving symbol addresses across the applications and libraries. Dynamic linking has many benefits: It allows code to be reused between applications, conserves memory (because only one copy of a library is kept in memory for all the applications that share it), and allows libraries to be patched and updated without modifying programs, among numerous other benefits. However, these benefits come at the cost of performance. For every call made to a function in a dynamically linked library, a trampoline is used to read the function address from a lookup table and branch to the function, incurring memory load and branch operations. Static linking avoids this performance penalty, but loses all the benefits of dynamic linking. Given its myriad benefits, dynamic linking is the predominant choice today, despite the performance cost. In this work, we propose a speculative hardware mechanism to optimize dynamic linking by avoiding executing the trampolines for library function calls, providing the benefits of dynamic linking with the performance of static linking. Speculatively skipping the memory load and branch operations of the library call trampolines improves performance by reducing the number of executed instructions and gains additional performance by reducing pressure on the instruction and data caches, TLBs, and branch predictors. Because the indirect targets of library call trampolines do not change during program execution, our speculative mechanism never misspeculates in practice. We evaluate our technique on real hardware with production software and observe up to 4% speedup using only 1.5KB of on-chip storage.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
1.1
0.1
0.1
0.1
0.1
0.05
0.02
0
0
0
0
0
0
0
Design of high-speed wireline transceivers for backplane communications in 28nm CMOS This paper describes the design of the architecture and circuit blocks for backplane communication transceivers. A channel study investigates the major challenges in the design of high-speed reconfigurable transceivers. Architectural solutions resolving channel-induced signal distortions are proposed and their effectiveness on various channels is investigated. Subsequently, the paper describes the design of a 0.6-13.1Gb/s fully-adaptive backplane transceiver embedded in state-of-the-art low-leakage 28nm CMOS FPGAs. The receiver front-end utilizes a 3-stage CTLE, a 7-tap speculative DFE, and a 4-tap sliding DFE to remove the immediate post-cursor ISI up to 64 taps. The clocking network provides continuous operation range between 0.6-13.1Gb/s. The transceiver achieves BER < 10⁻¹⁵ over a 31dB-loss backplane at 13.1Gb/s and over channels with 10GBASE-KR characteristics at 10.3125Gb/s.
Verifying global start-up for a Möbius ring-oscillator This paper presents the formal verification of start-up for a differential ring-oscillator circuit used in industrial designs. We present an efficient algorithm for finding DC equilibria to establish a condition that ensure the oscillator is free from lock-up. Further, we present a formal verification solution for the problem. Using dynamical systems theory, we show that any oscillator must have a non-empty set of states from which it fails to start properly. However, it is possible to show that these failures only occur with zero probability. To do so, this paper generalizes the "cone argument" initially presented in (Mitchell and Greenstreet, in Proceedings of the third workshop on designing correct circuits, 1996 ) and proves the soundness of this generalization. This paper also shows how concepts from analog design such as differential operation can be soundly incorporated into the verification to produce simpler models and reduce the complexity of the verification task.
A 28-Gb/s 4-Tap FFE/15-Tap DFE Serial Link Transceiver in 32-nm SOI CMOS Technology. This paper presents a 28-Gb/s transceiver in 32-nm SOI CMOS technology for chip-to-chip communications over high-loss electrical channels such as backplanes. The equalization needed for such applications is provided by a 4-tap baud-spaced feed-forward equalizer (FFE) in the transmitter and a two-stage peaking amplifier and 15-tap decision-feedback equalizer (DFE) in the receiver. The transmitter e...
21.1 A 1.7GHz MDLL-based fractional-N frequency synthesizer with 1.4ps RMS integrated jitter and 3mW power using a 1b TDC The introduction of inductorless frequency synthesizers into standardized wireless systems still requires a high level of innovation in order to achieve the stringent requirements of low noise and low power consumption. Synthesizers based on the so-called multiplying delay-locked loop (MDLL) represent one of the most promising architectures in this direction [1-3]. An MDLL resembles a ring oscillator, in which the signal edge traveling along the delay line is periodically refreshed by a clean edge of the reference clock. In this manner, the phase noise of the ring oscillator is filtered up to half the reference frequency and the total output jitter is reduced significantly. Unfortunately, the concept of MDLL, and in general of injection locking (IL), is inherently limited to integer-N synthesis, which makes it unacceptable in practical RF systems. A first extension of injection locking to coarse fractional-N resolution has been shown in [4], in which however the fractional resolution is bounded to the inverse of the number of ring-oscillator delay stages. This paper introduces a fractional-N MDLL-based frequency synthesizer with a 1b time/digital converter (TDC), which is able to outreach the performance of inductorless fractional-N synthesizers. The prototype synthesizes frequencies between 1.6 and 1.9GHz with 190Hz resolution and achieves RMS integrated jitter of 1.4ps at 3mW power consumption, even in the worst-case of near-integer channel.
Clocking Analysis, Implementation and Measurement Techniques for High-Speed Data Links—A Tutorial The performance of high-speed wireline data links depend crucially on the quality and precision of their clocking infrastructure. For future applications, such as microprocessor systems that require terabytes/s of aggregate bandwidth, signaling system designers will have to become even more aware of detailed clock design tradeoffs in order to jointly optimize I/O power, bandwidth, reliability, silicon area and testability. The goal of this tutorial is to assist I/O circuit and system designers in developing intuitive and practical understanding of I/O clocking tradeoffs at all levels of the link hierarchy from the circuit-level implementation to system-level architecture.
6.6 A 128Gb/s 1.3pJ/b PAM-4 Transmitter with Reconfigurable 3-Tap FFE in 14nm CMOS The ever-increasing demand for higher bandwidth continues to fuel the need for faster and more power-efficient IOs, with the next generation high-speed serial links expected to reach data rates higher than 112Gb/s using PAM-4 signaling [1–3]. While PAM-4 spectral efficiency is better than that of NRZ, it is less tolerant of residual ISI and noise. As a consequence, a driver with high bandwidth and large output amplitude is required. This paper presents a 64Gbaud PAM-4 TX with a fully reconfigurable 3-tap FFE, which achieves a power efficiency of 1.3pJ/b in PAM-4 mode and 2.7pJ/b in NRZ mode for a differential output swing of 1 Vppd. A feature of the FFE construction is the use of fully re-assignable FFE segments among the 3 taps, which allows a reduced number of segments for lower capacitance and higher driver bandwidth. To minimize power consumption, a quarter-rate clocking architecture is adopted with a tailless 4:1 multiplexer, which also acts as a pre-driver to a tailless CML output driver.
A Low Noise Sub-Sampling PLL in Which Divider Noise is Eliminated and PD/CP Noise is Not Multiplied by N^2 This paper presents a 2.2-GHz low jitter sub-sampling based PLL. It uses a phase-detector/charge-pump (PD/CP) that sub-samples the VCO output with the reference clock. In contrast to what happens in a classical PLL, the PD/CP noise is not multiplied by N^2 in this sub-sampling PLL, resulting in a low noise contribution from the PD/CP. Moreover, no frequency divider is needed in the locked state an...
An MLSE Receiver for Electronic Dispersion Compensation of OC-192 Fiber Links A maximum-likelihood sequence estimation (MLSE) receiver is fabricated to combat dispersion/intersymbol interference (chromatic and polarization mode), noise (optical and electrical), and nonlinearities (e.g., fiber, receiver photodiode, or laser) in OC-192 metro and long-haul links. The MLSE receiver includes a variable gain amplifier with 40-dB gain range and 7.5-GHz 3-dB bandwidth, a 12.5-Gb/s ...
Differential Power Analysis Cryptosystem designers frequently assume that secrets will be manipulated in closed, reliable computing environments. Unfortunately, actual computers and microchips leak information about the operations they process. This paper examines specific methods for analyzing power consumption measurements to find secret keys from tamper-resistant devices. We also discuss approaches for building cryptosystems that can operate securely in existing hardware that leaks information.
Parsimonious flooding in dynamic graphs An edge-Markovian process with birth-rate p and death-rate q generates infinite sequences of graphs (G_0, G_1, G_2, …) with the same node set [n] such that G_t is obtained from G_{t-1} as follows: if e ∉ E(G_{t-1}) then e ∈ E(G_t) with probability p, and if e ∈ E(G_{t-1}) then e ∉ E(G_t) with probability q. In this paper, we establish tight bounds on the complexity of flooding in edge-Markovian graphs, where flooding is the basic mechanism in which every node becoming aware of an information at step t forwards this information to all its neighbors at all forthcoming steps t' > t. These bounds complete previous results obtained by Clementi et al. Moreover, we also show that flooding in dynamic graphs can be implemented in a parsimonious manner, so as to save bandwidth while preserving efficiency in terms of simplicity and completion time. For a positive integer k, we say that the flooding protocol is k-active if each node forwards an information only during the k time steps immediately following the step at which the node receives that information for the first time. We define the reachability threshold for the flooding protocol as the smallest integer k such that, for any source s ∈ [n], the k-active flooding protocol from s completes (i.e., reaches all nodes), and we establish tight bounds for this parameter. We show that, for a large spectrum of parameters p and q, the reachability threshold is by several orders of magnitude smaller than the flooding time. In particular, we show that it is even constant whenever the ratio p/(p + q) exceeds log n/n. Moreover, we also show that being active for a number of steps equal to the reachability threshold (up to a multiplicative constant) allows the flooding protocol to complete in optimal time, i.e., in asymptotically the same number of steps as when being perpetually active. These results demonstrate that flooding can be implemented in a practical and efficient manner in dynamic graphs. The main ingredient in the proofs of our results is a reduction lemma enabling us to overcome the time dependencies in edge-Markovian dynamic graphs.
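As a rough illustration of the edge-Markovian process and the k-active flooding rule defined in the abstract above, the following Python sketch simulates both; all parameter values (n, p, q, k) and the empty initial graph are arbitrary choices for the example, not values from the paper.

```python
import itertools, random

def k_active_flooding_time(n=50, p=0.05, q=0.2, k=3, source=0, seed=1, max_steps=10_000):
    """Simulate k-active flooding on an edge-Markovian graph; return completion step or None."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    edges = set()                       # G_0 is empty here (a modeling choice)
    informed_at = {source: 0}           # node -> step at which it was first informed
    for t in range(1, max_steps + 1):
        # Edge birth/death: absent edges appear w.p. p, present edges disappear w.p. q.
        edges = {e for e in pairs
                 if (e in edges and rng.random() >= q) or (e not in edges and rng.random() < p)}
        # A node forwards only during the k steps after it first became informed.
        active = {v for v, t0 in informed_at.items() if t - 1 - t0 < k}
        for u, v in edges:
            if u in active and v not in informed_at:
                informed_at[v] = t
            if v in active and u not in informed_at:
                informed_at[u] = t
        if len(informed_at) == n:
            return t
    return None

print(k_active_flooding_time())
```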
A 6.5 GHz wideband CMOS low noise amplifier for multi-band use LNA based on a noise-cancelled common gate topology spans 0.1 to 6.5 GHz with a gain of 19 dB, a NF of 3 dB, and s11 < -10 dB. It is realized in 0.13-μm CMOS and dissipates 12 mW
Design and Analysis of a Class-D Stage With Harmonic Suppression. This paper presents the design and analysis of a low-power Class-D stage in 90 nm CMOS featuring a harmonic suppression technique, which cancels the 3rd harmonic by shaping the output voltage waveform. Only digital circuits are used and the short-circuit current present in Class-D inverter-based output stages is eliminated, relaxing the buffer requirements. Using buffers with reduced drive strengt...
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment which results in resource allocation problems, or, are job-based where for each invocation, the job along with the data has to be transferred for remote execution resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a Bit-Torrent like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.076926
0.069445
0.034728
0.020033
0.014667
0.007333
0.001412
0.00005
0
0
0
0
0
0
BHIDS: a new, cluster based algorithm for black hole IDS Mobile ad hoc networks (MANET) are highly vulnerable to attacks due to multiple factors including the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized monitoring. Besides slowing down the network, the traditional way of using firewalls and cryptography often fails to detect different types of attacks. The lack of infrastructures in MANETs makes the detection and control of security hazards all the more difficult. In this paper, we present a new, cluster based Black Hole Intrusion Detection System (BHIDS) for mobile, ad hoc networks. The features that are considered for BHIDS include mobility of nodes, variation in number of attacking nodes, packet delivery rate, density of the network, etc. The nodes in the network form a two-layered cluster. This helps splitting the processing and communication overhead between the cluster heads at Layer 1 and 2. The proposed IDS algorithm has been tested on a simulated network using the NS simulator. The performance graphs show marked improvement as far as packet dropping is concerned.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
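The dominance frontiers mentioned in the abstract above can be computed from immediate dominators with a short walk up the dominator tree. The sketch below follows the commonly cited formulation rather than reproducing the paper's exact algorithm, and the example CFG is invented.

```python
# Dominance frontiers from immediate dominators (Cooper/Harvey/Kennedy-style walk).
# The control-flow graph and its dominator tree below are a toy example.

def dominance_frontiers(preds, idom):
    """preds: node -> list of CFG predecessors; idom: node -> immediate dominator."""
    df = {n: set() for n in preds}
    for n, ps in preds.items():
        if len(ps) >= 2:                      # only join points generate frontier entries
            for p in ps:
                runner = p
                while runner != idom[n]:      # walk up the dominator tree toward idom(n)
                    df[runner].add(n)
                    runner = idom[runner]
    return df

# entry -> a -> {b, c} -> d (classic diamond): d lies on the frontiers of b and c.
preds = {"entry": [], "a": ["entry"], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
idom = {"entry": "entry", "a": "entry", "b": "a", "c": "a", "d": "a"}
print(dominance_frontiers(preds, idom))
```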
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
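The key-to-node mapping that Chord provides can be approximated with a plain consistent-hashing sketch. The code below omits Chord's finger tables and churn handling entirely; the class name, node names, and identifier width are made up for illustration.

```python
import bisect, hashlib

def _id(name, bits=16):
    """Hash a name onto the identifier ring [0, 2^bits)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

class ToyRing:
    """Maps a key to its successor node on the ring, as Chord's lookup would."""
    def __init__(self, nodes):
        self.points = sorted((_id(n), n) for n in nodes)

    def lookup(self, key):
        kid = _id(key)
        ids = [p for p, _ in self.points]
        i = bisect.bisect_left(ids, kid) % len(ids)   # first node id >= key id, wrapping
        return self.points[i][1]

ring = ToyRing(["node-A", "node-B", "node-C", "node-D"])
for key in ["alpha.txt", "beta.txt", "gamma.txt"]:
    print(key, "->", ring.lookup(key))
```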
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
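As a concrete instance of the ADMM splitting described in the abstract above, the following NumPy sketch applies the standard x-, z-, and u-updates to the lasso; the problem sizes, penalty rho, and regularization weight are arbitrary illustrative values, not anything prescribed by the review.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM (x-, z-, u-updates)."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))     # factor once, reuse every iteration
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # quadratic step
        z = soft(x + u, lam / rho)                    # proximal step for the l1 term
        u = u + x - z                                 # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=0.5), 2))
```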
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50mV is achieved, enabling less interleaving. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Sensor network gossiping or how to break the broadcast lower bound Gossiping is an important problem in Radio Networks that has been well studied, leading to many important results. Due to strong resource limitations of sensor nodes, previous solutions are frequently not feasible in Sensor Networks. In this paper, we study the gossiping problem in the restrictive context of Sensor Networks. By exploiting the geometry of sensor node distributions, we present reduced, optimal running time of O(D + Δ) for an algorithm that completes gossiping with high probability in a Sensor Network of unknown topology and adversarial wake-up, where D is the diameter and Δ the maximum degree of the network. Given that an algorithm for gossiping also solves the broadcast problem, our result proves that the classic lower bound of [16] can be broken if nodes are allowed to do preprocessing.
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Analysis of Distributed Random Grouping for Aggregate Computation on Wireless Sensor Networks with Randomly Changing Graphs Dynamical connection graph changes are inherent in networks such as peer-to-peer networks, wireless ad hoc networks, and wireless sensor networks. Considering the influence of the frequent graph changes is thus essential for precisely assessing the performance of applications and algorithms on such networks. In this paper, using stochastic hybrid systems (SHSs), we model the dynamics and analyze the performance of an epidemic-like algorithm, distributed random grouping (DRG), for average aggregate computation on a wireless sensor network with dynamical graph changes. Particularly, we derive the convergence criteria and the upper bounds on the running time of the DRG algorithm for a set of graphs that are individually disconnected but jointly connected in time. An effective technique for the computation of a key parameter in the derived bounds is also developed. Numerical results and an application extended from our analytical results to control the graph sequences are presented to exemplify our analysis.
Brief announcement: locality-based aggregate computation in wireless sensor networks We present DRR-gossip, an energy-efficient and robust aggregate computation algorithm in sensor networks. We prove that the DRR-gossip algorithm requires O(n) messages and O(n3/2/log1/2 n) one-hop wireless transmissions to obtain aggregates on a random geometric graph. This reduces the energy consumption by at least a factor of 1/log n over the standard uniform gossip algorithm. Experiments validate the theoretical results and show that DRR-gossip needs much less transmissions than other gossip-based schemes.
Randomized Communication in Radio Networks A communication network is called a radio network if its nodes may exchange messages among themselves in a certain restricted way. First, a multicast performed by a node delivers copies of the same message to all the reachable nodes. Secondly, a node can successfully receive an incoming message only if exactly one of its neighbors sent a message in that step. It is this semantics of ports at nodes that defines the networks rather than the fact that radio waves are used as the medium of...
An early-stopping protocol for computing aggregate functions in Sensor Networks In this paper, we study algebraic aggregate computations in Sensor Networks. The main contribution is the presentation of an early-stopping protocol that computes the average function under a harsh model of the conditions under which sensor nodes operate. This protocol is shown to be time-optimal in the presence of infrequent failures. The approach followed saves time and energy by the computation relying on a small network of delegate nodes that can be rebuilt fast in case of node failures and communicate using a collision-free schedule. Delegate nodes run two protocols simultaneously, namely, a collection/dissemination tree-based algorithm, which is shown to be optimal, and a mass-distribution algorithm. Both algorithms are analyzed under a model where the frequency of failures is a parameter. Other aggregate computation algorithms can be easily derived from this protocol. To the best of our knowledge, this is the first optimal early-stopping algorithm for aggregate computations in Sensor Networks.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
Ad-hoc On-Demand Distance Vector Routing This paper describes work carried out as part of the GUIDE project at Lancaster University. The overall aim of the project is to develop a context-sensitive tourist guide for visitors to the city of Lancaster. Visitors are equipped with portable GUIDE ...
Unifying stabilization and termination in message-passing systems The paper dispels the myth that it is impossible for a message-passing program to be both terminating and stabilizing. We consider a rather general notion of termination: a terminating program eventually stops its execution after the environment ceases to provide input. We identify termination-symmetry to be a necessary condition for a problem to admit a solution with such properties. Our results do confirm that a number of well-known problems (e.g., consensus, leader election) do not allow a terminating and stabilizing solution. On the flip side, they show that other problems such as mutual exclusion and reliable-transmission allow such solutions. We present a message-passing solution to the mutual exclusion problem that is both stabilizing and terminating. We also describe an approach of adding termination to a stabilizing program. To illustrate this approach, we add termination to a stabilizing solution for the reliable transmission problem.
Baring It All to Software: Raw Machines The most radical of the architectures that appear in this issue are Raw processors-highly parallel architectures with hundreds of very simple processors coupled to a small portion of the on-chip memory. Each processor, or tile, also contains a small bank of configurable logic, allowing synthesis of complex operations directly in configurable hardware. Unlike the others, this architecture does not use a traditional instruction set architecture. Instead, programs are compiled directly onto the Raw hardware, with all units told explicitly what to do by the compiler. The compiler even schedules most of the intertile communication. The real limitation to this architecture is the efficacy of the compiler. The authors demonstrate impressive speedups for simple algorithms that lend themselves well to this architectural model, but whether this architecture will be effective for future workloads is an open question.
Fully Monolithic Cellular Buck Converter Design for 3-D Power Delivery A fully monolithic interleaved buck dc-dc point-of-load (PoL) converter has been designed and fabricated in a 0.18-μm SiGe BiCMOS process. Target application of the design is 3-D power delivery for future microprocessors, in which the PoL converter will be vertically integrated with the processor using wafer-level 3-D interconnect technologies. Advantages of 3-D power delivery over conventional discrete voltage regulator modules (VRMs) are discussed. The prototype design, using two interleaved buck converter cells each operating at 200 MHz switching frequency and delivering 500 mA output current, is discussed with a focus on the converter power stage and control loop to highlight the tradeoffs unique to such high-frequency, monolithic designs. Measured steady-state and dynamic responses of the fabricated prototype are presented to demonstrate the ability of such monolithic converters to meet the power delivery requirements of future processors.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolving of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper presents first the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals.
Kinesis: a security incident response and prevention system for wireless sensor networks This paper presents Kinesis, a security incident response and prevention system for wireless sensor networks, designed to keep the network functional despite anomalies or attacks and to recover from attacks without significant interruption. Due to the deployment of sensor networks in various critical infrastructures, the applications often impose stringent requirements on data reliability and service availability. Given the failure- and attack-prone nature of sensor networks, it is a pressing concern to enable the sensor networks to provide continuous and unobtrusive services. Kinesis is quick and effective in response to incidents, distributed in nature, and dynamic in selecting response actions based on the context. It is lightweight in terms of response policy specification, and communication and energy overhead. A per-node single timer based distributed strategy to select the most effective response executor in a neighborhood makes the system simple and scalable, while achieving proper load distribution and redundant action optimization. We implement Kinesis in TinyOS and measure its performance for various application and network layer incidents. Extensive TOSSIM simulations and testbed experiments show that Kinesis successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.082324
0.105074
0.07222
0.07222
0.055817
0.031022
0.011986
0.000508
0
0
0
0
0
0
The Impact of Data Aggregation in Wireless Sensor Networks Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Data-centric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-to-end routing schemes. We examine the impact of source-destination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases.
A survey on routing protocols for wireless sensor networks Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. This paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued. The three main categories explored in this paper are data-centric, hierarchical and location-based. Each routing protocol is described and discussed under the appropriate category. Moreover, protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed. The paper concludes with open research issues.
Analysis of Distributed Random Grouping for Aggregate Computation on Wireless Sensor Networks with Randomly Changing Graphs Dynamical connection graph changes are inherent in networks such as peer-to-peer networks, wireless ad hoc networks, and wireless sensor networks. Considering the influence of the frequent graph changes is thus essential for precisely assessing the performance of applications and algorithms on such networks. In this paper, using stochastic hybrid systems (SHSs), we model the dynamics and analyze the performance of an epidemic-like algorithm, distributed random grouping (DRG), for average aggregate computation on a wireless sensor network with dynamical graph changes. Particularly, we derive the convergence criteria and the upper bounds on the running time of the DRG algorithm for a set of graphs that are individually disconnected but jointly connected in time. An effective technique for the computation of a key parameter in the derived bounds is also developed. Numerical results and an application extended from our analytical results to control the graph sequences are presented to exemplify our analysis.
Brief announcement: locality-based aggregate computation in wireless sensor networks We present DRR-gossip, an energy-efficient and robust aggregate computation algorithm in sensor networks. We prove that the DRR-gossip algorithm requires O(n) messages and O(n3/2/log1/2 n) one-hop wireless transmissions to obtain aggregates on a random geometric graph. This reduces the energy consumption by at least a factor of 1/log n over the standard uniform gossip algorithm. Experiments validate the theoretical results and show that DRR-gossip needs much less transmissions than other gossip-based schemes.
Initializing sensor networks of non-uniform density in the weak sensor model Assumptions about node density in the Sensor Networks literature are frequently too strong or too weak. Neither absolutely arbitrary nor uniform deployment seem feasible in most of the intended applications of sensor nodes. We present a Weak Sensor Model-compatible distributed protocol for hop-optimal network initialization, under the assumption that the maximum density of nodes is some value Δ known by all of the nodes. In order to prove lower bounds, we observe that all nodes must communicate with some other node in order to join the network, and we call the problem of achieving such a communication the Group Therapy Problem. We show lower bounds for the Group Therapy Problem in Radio Networks of maximum density Δ, regardless of the use of randomization, and a stronger lower bound for the important class of randomized fair protocols. We also show that even when nodes are distributed uniformly, the same lower bound holds, even in expectation and even for the simpler problem of Clear Transmission.
Geographic Gossip: Efficient Averaging for Sensor Networks Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of n and √n respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy ε using O(n^1.5 √(log n) log ε^-1) radio transmissions, which yields a √(n/log n) factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental
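For reference, the standard pairwise gossip baseline that geographic gossip improves upon can be sketched in a few lines; the ring topology and number of rounds below are illustrative only, and the code does not implement the geographic routing or resampling steps of the paper.

```python
import random

def pairwise_gossip(values, neighbors, rounds=2000, seed=0):
    """Standard gossip: repeatedly pick an edge and average the two endpoint values."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = 0.5 * (x[i] + x[j])   # both endpoints move to their average
    return x

# Ring of 8 nodes: slow mixing on such graphs is exactly what geographic gossip targets.
n = 8
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
readings = [float(i) for i in range(n)]
estimates = pairwise_gossip(readings, neighbors)
print("true average:", sum(readings) / n)
print("estimates   :", [round(v, 3) for v in estimates])
```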
Next century challenges: scalable coordination in sensor networks
Extremal cover times for random walks on trees
Nebula: Distributed Edge Cloud for Data Intensive Computing Centralized cloud infrastructures have become the de-facto platform for data-intensive computing today. However, they suffer from inefficient data mobility due to the centralization of cloud resources, and hence, are highly unsuited for dispersed-data-intensive applications, where the data may be spread at multiple geographical locations. In this paper, we present Nebula: a dispersed cloud infrastructure that uses voluntary edge resources for both computation and data storage. We describe the lightweight Nebula architecture that enables distributed data-intensive computing through a number of optimizations including location-aware data and computation placement, replication, and recovery. We evaluate Nebula's performance on an emulated volunteer platform that spans over 50 PlanetLab nodes distributed across Europe, and show how a common data-intensive computing framework, MapReduce, can be easily deployed and run on Nebula. We show Nebula MapReduce is robust to a wide array of failures and substantially outperforms other wide-area versions based on a BOINC like model.
A Linear Representation of Dynamics of Boolean Networks A new matrix product, called semi-tensor product of matrices, is reviewed. Using it, a matrix expression of logic is proposed, where a logical variable is expressed as a vector, a logical function is expressed as a multiple linear mapping. Under this framework, a Boolean network equation is converted into an equivalent algebraic form as a conventional discrete-time linear system. Analyzing the transition matrix of the linear system, formulas are obtained to show a) the number of fixed points; b) the numbers of cycles of different lengths; c) transient period, for all points to enter the set of attractors; and d) basin of each attractor. The corresponding algorithms are developed and used to some examples.
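The state-transition-matrix view of a Boolean network described in the abstract above can be illustrated on a toy two-node network. The sketch below enumerates the 2^n states directly rather than using the semi-tensor product calculus, and the update rules are invented for the example.

```python
import itertools
import numpy as np

# Example Boolean network (illustrative): x1' = x2, x2' = x1 AND x2.
update = lambda x1, x2: (x2, x1 and x2)

states = list(itertools.product([0, 1], repeat=2))     # all 2^n Boolean states
index = {s: i for i, s in enumerate(states)}

# Column j of L is the indicator vector of the successor of state j, so one
# synchronous update of the network is a linear map on state indicator vectors.
L = np.zeros((len(states), len(states)), dtype=int)
for s in states:
    L[index[update(*s)], index[s]] = 1

fixed_points = [states[j] for j in range(len(states)) if L[j, j] == 1]
print("transition matrix:\n", L)
print("fixed points:", fixed_points)   # states s with update(s) == s
```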
Noise in current-commutating passive FET mixers Noise in the mixer of zero-IF receivers can compromise the overall receiver sensitivity. The evolution of a passive CMOS mixer based on the knowledge of the physical mechanisms of noise in an active mixer is explained. Qualitative physical models that simply explain the frequency translation of both the flicker and white noise of different FETs in the mixer have been developed. Derived equations have been verified by simulations, and mixer optimization has been explained.
Interactive presentation: An FPGA based all-digital transmitter with radio frequency output for software defined radio In this paper, we present the architecture and implementation of an all-digital transmitter with radio frequency output targeting an FPGA device. FPGA devices have been widely adopted in the applications of digital signal processing (DSP) and digital communication. They are typically well suited for the evolving technology of software defined radios (SDR) due to their reconfigurability and programmability. However, FPGA devices are mostly used to implement digital baseband and intermediate frequency (IF) functionalities. Therefore, significant analog and RF components are still needed to fulfill the radio communication requirements. The all-digital transmitter presented in this paper directly synthesizes RF signal in the digital domain, therefore eliminates the need for most of the analog and RF components. The all-digital transmitter consists of one QAM modulator and one RF pulse width modulator (RFPWM). The binary output waveform from RFPWM is centered at 800MHz with 64QAM signaling format. The entire transmitter is implemented using Xilinx Virtex2pro device with on chip multi-gigabit transceiver (MGT). The adjacent channel leakage ratio (ACLR) measured in the 20 MHz passband is 45dB, and the measured error vector magnitude (EVM) is less than 1%. Our work extends the digital implementation of communication applications on an FPGA platform to radio frequency, therefore making a significant evolution towards an ideal SDR.
A 15.5 dB, wide signal swing, dynamic amplifier using a common-mode voltage detection technique This paper presents a high-speed, low-power and wide signal swing differential dynamic amplifier using a common-mode voltage detection technique. The proposed dynamic amplifier achieves a 15.5 dB gain with less than 1 dB drop over a signal swing of 1.3 Vpp at an operating frequency of 1.5 GHz with a VDD of 1.2 V in 90 nm CMOS. The power consumption of the proposed circuit can be reduced linearly with operating frequency lowering.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.037251
0.056234
0.050193
0.050193
0.033604
0.022554
0.004514
0.000157
0.000002
0
0
0
0
0
A DHT-Based Discovery Service for the Internet of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i.e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
Kademlia: A Peer-to-Peer Information System Based on the XOR Metric We describe a peer-to-peer distributed hash table with provable consistency and performance in a fault-prone environment. Our system routes queries and locates nodes using a novel XOR-based metric topology that simplifies the algorithm and facilitates our proof. The topology has the property that every message exchanged conveys or reinforces useful contact information. The system exploits this information to send parallel, asynchronous query messages that tolerate node failures without imposing timeout delays on users.
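The XOR metric at the heart of Kademlia is easy to sketch. The snippet below only shows distance computation and "k closest nodes" selection with toy 4-bit IDs, not the routing-table or parallel-lookup machinery of the real protocol.

```python
def xor_distance(a, b):
    """Kademlia's metric: the distance between two IDs is their bitwise XOR."""
    return a ^ b

def k_closest(target, node_ids, k=3):
    """Nodes sorted by XOR distance to the target, as a lookup step would prefer."""
    return sorted(node_ids, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0001, 0b0100, 0b0111, 0b1010, 0b1100]   # toy 4-bit IDs
target = 0b0110
print([bin(n) for n in k_closest(target, nodes)])
# Nodes sharing more high-order bits with the target have a smaller XOR distance.
```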
Analysis of and proposal for a disaster information network from experience of the Great East Japan Earthquake Recently serious natural disasters such as earthquakes, tsunamis, typhoons, and hurricanes have occurred at many places around the world. The East Japan Great Earthquake on March 11, 2011 had more than 19,000 victims and destroyed a huge number of houses, buildings, roads, and seaports over the wide area of Northern Japan. Information networks and systems and electric power lines were also severely damaged by the great tsunami. Functions such as the highly developed information society, and residents' safety and trust were completely lost. Thus, through the lessons from this great earthquake, a more robust and resilient information network has become one of the significant subjects. In this article, our information network recovery activity in the aftermath of the East Japan Great Earthquake is described. Then the problems of current information network systems are analyzed to improve our disaster information network and system through our network recovery activity. Finally we suggest the systems and functions required for future large-scale disasters.
Communications and Control for Wireless Drone-Based Antenna Array In this paper, the effective use of multiple quadrotor drones as an aerial antenna array that provides wireless service to ground users is investigated. In particular, under the goal of minimizing the airborne service time needed for communicating with ground users, a novel framework for deploying and operating a drone-based antenna array system whose elements are single-antenna drones is proposed...
Joint Optimization of Task Scheduling and Image Placement in Fog Computing Supported Software-Defined Embedded System. Traditional standalone embedded system is limited in their functionality, flexibility, and scalability. Fog computing platform, characterized by pushing the cloud services to the network edge, is a promising solution to support and strengthen traditional embedded system. Resource management is always a critical issue to the system performance. In this paper, we consider a fog computing supported software-defined embedded system, where task images lay in the storage server while computations can be conducted on either embedded device or a computation server. It is significant to design an efficient task scheduling and resource management strategy with minimized task completion time for promoting the user experience. To this end, three issues are investigated in this paper: 1) how to balance the workload on a client device and computation servers, i.e., task scheduling, 2) how to place task images on storage servers, i.e., resource management, and 3) how to balance the I/O interrupt requests among the storage servers. They are jointly considered and formulated as a mixed-integer nonlinear programming problem. To deal with its high computation complexity, a computation-efficient solution is proposed based on our formulation and validated by extensive simulation based studies.
Edge-to-Edge Resource Discovery using Metadata Replication Edge computing has been recently introduced as an intermediary between Internet of Things (IoT) deployments and the cloud, providing data or control facilities to participating IoT devices. This includes actively supporting IoT resource discovery, something particularly pertinent when building large-scale, distributed and heterogeneous IoT systems. Moreover, edge devices supporting resource discovery are required to meet the stringent requirements prevalent in IoT systems, including high availability, low latency, and privacy. To this end, we present a resource discovery platform for IoT resources situated at the edge of the network. Our approach aims at providing a seamless discovery process that is able to (i) extend the covered area by deploying additional edge nodes and (ii) assist in the development of new IoT applications that target already available resources. Within our proposed platform, devices located in a certain proximity connect and form an edge-to-edge network that we call an edge neighborhood - our edge-to-edge metadata replication platform enables participating devices to discover available resources. Our solution is characterized by the absence of centralization, as edge nodes exchange metadata about available resources within their scope in a peer-to-peer manner.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
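The paper's realizations are at the machine-code level, but the dispatch idea can be hinted at in a few lines of Python. This is a loose, illustrative analogy only: the "program" is a flat sequence of routine references plus operands, executed by a trivial loop with no opcode decoding; all names are made up.

```python
# Illustrative sketch of threaded-code dispatch. In true threaded code each
# program cell is a routine address and the "inner interpreter" is essentially
# a jump through the next cell; here a plain Python loop plays that role.

def push(vm, operand):      # primitive routines operate on shared machine state
    vm["stack"].append(operand)

def add(vm, _):
    b, a = vm["stack"].pop(), vm["stack"].pop()
    vm["stack"].append(a + b)

def emit(vm, _):
    print(vm["stack"].pop())

# Threaded program: a sequence of (routine, operand) cells, nothing to decode.
program = [(push, 2), (push, 3), (add, None), (emit, None)]

vm = {"stack": []}
for routine, operand in program:   # the whole "interpreter" is this loop
    routine(vm, operand)            # -> prints 5
```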
Leveraging on-chip voltage regulators as a countermeasure against side-channel attacks Side-channel attacks have become a significant threat to integrated circuit security. Circuit-level techniques are proposed in this paper as a countermeasure against side-channel attacks. A distributed on-chip power delivery system consisting of multi-level switched capacitor (SC) voltage converters is proposed where the individual interleaved stages are turned on and turned off either based on the workload information or pseudo-randomly to scramble the power consumption profile. In the case that the changes in the workload demand do not trigger the power delivery system to turn on or off individual stages, the active stages are reshuffled with so-called converter-reshuffling (CoRe) to insert random spikes in the power consumption profile. An entropy-based metric is developed to evaluate the security performance of the proposed converter-reshuffling technique as compared to three other existing on-chip power delivery schemes. The increase in the power trace entropy with the CoRe scheme is also demonstrated with simulation results to further verify the theoretical analysis.
The PARSEC benchmark suite: characterization and architectural implications This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs). Previous available benchmarks for multiprocessors have focused on high-performance computing applications and used a limited number of synchronization methods. PARSEC includes emerging applications in recognition, mining and synthesis (RMS) as well as systems applications which mimic large-scale multithreaded commercial programs. Our characterization shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic. The benchmark suite has been made available to the public.
Side-Channel Leaks in Web Applications: A Reality Today, a Challenge Tomorrow With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
Wideband Balun-LNA With Simultaneous Output Balancing, Noise-Canceling and Distortion-Canceling An inductorless low-noise amplifier (LNA) with active balun is proposed for multi-standard radio applications between 100 MHz and 6 GHz. It exploits a combination of a common-gate (CG) stage and an admittance-scaled common-source (CS) stage with replica biasing to maximize balanced operation, while simultaneously canceling the noise and distortion of the CG-stage. In this way, a noise figure (NF) close to or below 3 dB can be achieved, while good linearity is possible when the CS-stage is carefully optimized. We show that a CS-stage with deep submicron transistors can have high IIP2, because the VGS·VDS cross-term in a two-dimensional Taylor approximation of the IDS(VGS, VDS) characteristic can cancel the traditionally dominant square-law term in the IDS(VGS) relation at practical gain values. Using standard 65 nm transistors at 1.2 V supply voltage, we realize a balun-LNA with 15 dB gain, NF < 3.5 dB and IIP2 > +20 dBm, while simultaneously achieving an IIP3 > 0 dBm. The best performance of the balun is achieved between 300 MHz and 3.5 GHz with gain and phase errors below 0.3 dB and ±2 degrees. The total power consumption is 21 mW, while the active area is only 0.01 mm2.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
27.9 A 200kS/s 13.5b integrated-fluxgate differential-magnetic-to-digital converter with an oversampling compensation loop for contactless current sensing High-voltage applications such as electric motor controllers, solar panel power inverters, electric vehicle battery chargers, uninterruptible and switched-mode power supplies benefit from the galvanic isolation of contactless current sensors (CCS) [1]. These include magnetic sensors that sense the magnetic field emanating from a current-carrying conductor. The offset and resolution of Hall-effect sensors are in the μT-level [1-3], in contrast to the μT-level accuracy of integrated-fluxgate (IFG) magnetometers [4]. Previously reported sampled-data closed-loop IFG readouts have limited BWs as their sampling frequencies (fs) are limited to be less than or equal to the IFG excitation frequency, fEXC [5-7]. This paper describes a differential closed-loop IFG CCS with fs > fEXC. The differential architecture rejects magnetic stray fields and achieves 750× larger BW than the prior closed-loop IFG readouts [6-7] with 10× better offset than the Hall-effect sensors [1-3].
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.24
0.24
0.24
0.24
0.12
0.12
0
0
0
0
0
0
0
0
Analysis and Design of Voltage-Controlled Oscillator Based Analog-to-Digital Converter A voltage-controlled oscillator (VCO) based analog-to-digital converter (ADC) is a time-based architecture with a first-order noise-shaping property, which can be implemented using a VCO and digital circuits. This paper analyzes the performance of VCO-based ADCs in the presence of nonidealities such as jitter, nonlinearity, mismatch, and the metastability of D flip-flops. Based on this analysis, design criteria for determining parameters for VCO-based ADCs are described. In addition, a digital calibration technique to enhance the spurious-free dynamic range degraded by the nonlinearity is also introduced. To verify the theoretical analysis, a prototype chip is implemented in a 0.13-μm CMOS process. With a 500-MHz sampling frequency, the prototype achieves a signal-to-noise ratio ranging from 71.8 to 21.3 dB for an input bandwidth of 100 kHz-247 MHz, while dissipating 12.6 mW and occupying an area of 0.078 mm2.
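As a reminder of where the first-order noise-shaping property comes from, here is a small behavioral model (an idealized sketch with made-up parameters, not the prototype described above): the input tunes the oscillation frequency, the phase is accumulated, and each output word is the per-sample difference of the quantized phase, so the quantization error appears first-order differenced.

```python
# Behavioral sketch of an ideal VCO-based ADC (no jitter, mismatch, or
# nonlinearity): input sets the instantaneous VCO frequency, phase is
# integrated, and the output is the number of edges counted per sample.
import math

def vco_adc(samples, f_center=1e8, k_vco=5e7, f_s=5e8, edges_per_cycle=16):
    phase, prev_count, out = 0.0, 0, []
    for x in samples:                      # x assumed normalized to [-1, 1]
        f_inst = f_center + k_vco * x      # voltage-to-frequency conversion
        phase += f_inst / f_s              # cycles accumulated this period
        count = int(phase * edges_per_cycle)
        out.append(count - prev_count)     # first difference of quantized phase
        prev_count = count                 # -> quantization error is noise-shaped
    return out

sig = [0.5 * math.sin(2 * math.pi * 1e6 * n / 5e8) for n in range(64)]
print(vco_adc(sig)[:8])
```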
Signal Folding in A/D Converters Signal folding appears in A/D converters (ADCs) in various ways. In this paper, the evolution of this technique is derived from the fundamentals of quantization to obtain systematic insights. We look upon folding as an automatic multiplexing of zero crossings, which simplifies hardware while preserving the high speed and low latency of a flash ADC. By appreciating similarities between the well-kno...
A 45 nm Resilient Microprocessor Core for Dynamic Variation Tolerance A 45 nm microprocessor core integrates resilient error-detection and recovery circuits to mitigate the clock frequency (FCLK) guardbands for dynamic parameter variations to improve throughput and energy efficiency. The core supports two distinct error-detection designs, allowing a direct comparison of the relative trade-offs. The first design embeds error-detection sequential (EDS) circuits in critical paths to detect late timing transitions. In addition to reducing the FCLK guardbands for dynamic variations, the embedded EDS design can exploit path-activation rates to operate the microprocessor faster than infrequently-activated critical paths. The second error-detection design offers a less-intrusive approach for dynamic timing-error detection by placing a tunable replica circuit (TRC) per pipeline stage to monitor worst-case delays. Although the TRCs require a delay guardband to ensure the TRC delay is always slower than critical-path delays, the TRC design captures most of the benefits from the embedded EDS design with less implementation overhead. Furthermore, while core min-delay constraints limit the potential benefits of the embedded EDS design, a salient advantage of the TRC design is the ability to detect a wider range of dynamic delay variation, as demonstrated through low supply voltage (VCC) measurements. Both error-detection designs interface with error-recovery techniques, enabling the detection and correction of timing errors from fast-changing variations such as high-frequency VCC droops. The microprocessor core also supports two separate error-recovery techniques to guarantee correct execution even if dynamic variations persist. The first technique requires clock control to replay errant instructions at 1/2 FCLK. In comparison, the second technique is a new multiple-issue instruction replay design that corrects errant instructions with a lower performance penalty and without requiring clock control. Silicon measurements demonstrate that resilient circuits enable a 41% throughput gain at equal energy or a 22% energy reduction at equal throughput, as compared to a conventional design when executing a benchmark program with a 10% VCC droop. In addition, the microprocessor includes a new adaptive clock control circuit that interfaces with the resilient circuits and a phase-locked loop (PLL) to track recovery cycles and adapt to persistent errors by dynamically changing FCLK for maximum efficiency.
A Mostly Digital VCO-Based CT-SDM With Third-Order Noise Shaping. This paper presents the architectural concept and implementation of a mostly digital voltage-controlled oscillator-analog-to-digital converter (VCO-ADC) with third-order quantization noise shaping. The system is based on the combination of a VCO and a digital counter. It is shown how this combination can function as a continuous-time integrator to form a high-order continuous-time sigma-delta modu...
A 0.5-V 1.6-mW 2.4-GHz Fractional-N All-Digital PLL for Bluetooth LE With PVT-Insensitive TDC Using Switched-Capacitor Doubler in 28-nm CMOS. This paper proposes an ultra-low-voltage (ULV) fractional-N all-digital PLL (ADPLL) powered from a single 0.5-V supply. While its digitally controlled oscillator (DCO) runs directly at 0.5 V, an internal switched-capacitor dc-dc converter “doubles” the supply voltage to all the digital circuitry and particularly regulates the time-to-digital converter (TDC) supply to stabilize its resolution, thus...
An Ultra-Low Voltage Level Shifter Using Revised Wilson Current Mirror for Fast and Energy-Efficient Wide-Range Voltage Conversion from Sub-Threshold to I/O Voltage This paper presents a novel ultra-low voltage level shifter for fast and energy-efficient wide-range voltage conversion from sub-threshold to I/O voltage. By addressing the voltage drop and non-optimal feedback control in a state-of-the-art level shifter based on Wilson current mirror, the proposed level shifter with revised Wilson current mirror significantly improves the delay and power consumption while achieving a wide voltage conversion range. It also employs mixed-Vt device and device sizing aware of inverse narrow width effect to further improve the delay and power consumption. Measurement results at 0.18 μm show that compared with the Wilson current mirror based level shifter, the proposed level shifter improves the delay, switching energy and leakage power by up to 3×, 19×, 29× respectively, when converting 0.3 V to a voltage between 0.6 V and 3.3 V. More specifically, it achieves 1.03 (or 1.15) FO4 delay, 39 (or 954) fJ/transition and 160 (or 970) pW leakage power, when converting 0.3 V to 1.8 V (or 3.3 V), which is better than several state-of-the-art level shifters for similar range voltage conversion. The measurement results also show that the proposed level shifter has good delay scalability with supply voltage scaling and low sensitivity to process and temperature variations.
Analog Filter Design Using Ring Oscillator Integrators Integrators are key building blocks in many analog signal processing circuits and systems. The DC gain of conventional opamp-RC or Gm-C integrators is severely limited by the gain of operational transconductance amplifier (OTA) used to implement them. Process scaling reduces transistor output resistance, which further exacerbates this issue. We propose applying ring oscillator integrators (ROIs) in the design of high order analog filters. ROIs implemented with simple CMOS inverters achieve infinite DC gain at low supply voltages independent of transistor non-idealities and imperfections such as finite output impedance. Consequently, ROIs scale more effectively into newer processes. A prototype fourth order filter designed using the ROIs was fabricated in 90 nm CMOS and occupies an area of 0.29 mm2. Operating with a 0.55 V supply, the filter consumes 2.9 mW power and achieves bandwidth of 7 MHz, SNR of 61.4 dB, SFDR of 67.6 dB and THD of 60.1 dB. The measured IM3 obtained by feeding two tones at 1 MHz and 2 MHz is 63.4 dB.
22.6 A 2.2GS/s 7b 27.4mW time-based folding-flash ADC with resistively averaged voltage-to-time amplifiers High-speed low-resolution ADCs are widely used for various applications, such as 60GHz receivers, serial links, and high-density disk drive systems. Flash architectures have the highest conversion rate without employing time interleaving. Moreover, flash architectures have the lowest latency, which is often required in feedback-loop systems. However, the area and power consumption are exponentially increased by increasing the resolution since the number of comparators must be 2^N. A folding architecture is a well-known technique to reduce the number of comparators in an ADC while maintaining high sampling rate and low latency [1,2]. Folding architectures were previously realized by generating a number of zero crossings with folding amplifiers. However, the conventional folding amplifiers consume a large amount of power to realize a fast response. In contrast, a folding ADC with only dynamic power consumption and without using amplifiers is reported in [3]. However, only a folding factor of 2 is realized, and therefore the number of comparators is reduced by half.
A 90-MS/s 11-MHz-Bandwidth 62-dB SNDR Noise-Shaping SAR ADC Although charge-redistribution successive approximation (SAR) ADCs are highly efficient, comparator noise and other effects limit the most efficient operation to below 10-b ENOB. This work introduces an oversampling, noise-shaping SAR ADC architecture that achieves 10-b ENOB with an 8-b SAR DAC array. A noise-shaping scheme shapes both comparator noise and quantization noise, thereby decoupling comparator noise from ADC performance. The loop filter is comprised of a cascade of a two-tap charge-domain FIR filter and an integrator to achieve good noise shaping even with a low-quality integrator. The prototype ADC is fabricated in 65-nm CMOS and occupies a core area of 0.03 mm2. Operating at 90 MS/s, it consumes 806 μW from a 1.2-V supply.
Distributed average consensus with least-mean-square deviation We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.
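The update rule and quality metric described above are easy to sketch. The snippet below is illustrative only: the example weight matrix is an arbitrary symmetric choice, not the optimized weights that are the subject of the paper.

```python
# Sketch of the noisy consensus iteration x(t+1) = W x(t) + noise, with the
# mean-square deviation from the current average as the quality measure.
import random

def consensus_step(x, weights, sigma=0.01):
    n = len(x)
    new_x = []
    for i in range(n):
        avg = sum(weights[i][j] * x[j] for j in range(n))  # weighted neighbor average
        new_x.append(avg + random.gauss(0.0, sigma))        # additive zero-mean noise
    return new_x

def mean_square_deviation(x):
    mean = sum(x) / len(x)
    return sum((xi - mean) ** 2 for xi in x) / len(x)

# Example: 4 nodes on a ring with uniform symmetric edge weights.
W = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]
x = [1.0, 2.0, 3.0, 4.0]
for _ in range(50):
    x = consensus_step(x, W)
print(mean_square_deviation(x))   # settles near a steady-state value
```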
An 8-bit 100-MHz CMOS linear interpolation DAC An 8-bit 100-MHz CMOS linear interpolation digital-to-analog converter (DAC) is presented. It applies a time-interleaved structure on an 8-bit binary-weighted DAC, using 16 evenly skewed clocks generated by a voltage-controlled delay line to realize the linear interpolation function. The linear interpolation increases the attenuation of the DAC's image components. The requirement for the analog re...
Secure random number generation in wireless sensor networks Reliable random number generation is crucial for many available security algorithms, and some methods presented in the literature propose to generate random numbers from measurements collected in the physical environment, in order to ensure true randomness. However, the effectiveness of such methods can be compromised if an attacker is able to gain access to the measurements and thus infer the generated random number. In our paper, we present an algorithm that guarantees security for the generation process in a real-world scenario using wireless sensor nodes as the sources of the physical measurements. The proposed method uses distributed leader election for selecting a random source of data. We prove the robustness of the algorithm by discussing common security attacks, and we present a theoretical and experimental evaluation regarding its complexity in terms of time and exchanged messages.
An Ultra-Low Power Fully Integrated Energy Harvester Based on Self-Oscillating Switched-Capacitor Voltage Doubler This paper presents a fully integrated energy harvester that maintains >35% end-to-end efficiency when harvesting from a 0.84 mm2 solar cell in a low-light condition of 260 lux, converting 7 nW input power from 250 mV to 4 V. Newly proposed self-oscillating switched-capacitor (SC) DC-DC voltage doublers are cascaded to form a complete harvester, with configurable overall conversion ratio from 9× to 23×. In each voltage doubler, the oscillator is completely internalized within the SC network, eliminating clock generation and level shifting power overheads. A single doubler has >70% measured efficiency across 1 nA to 0.35 mA output current (>10^5 range) with low idle power consumption of 170 pW. In the harvester, each doubler has independent frequency modulation to maintain its optimum conversion efficiency, enabling optimization of harvester overall conversion efficiency. A leakage-based delay element provides energy-efficient frequency control over a wide range, enabling low idle power consumption and a wide load range with optimum conversion efficiency. The harvester delivers 5 nW-5 μW output power with >40% efficiency and has an idle power consumption of 3 nW in a test chip fabricated in 0.18 μm CMOS technology.
Robust Biopotential Acquisition via a Distributed Multi-Channel FM-ADC. This contribution presents an active electrode system for biopotential acquisition using a distributed multi-channel FM-modulated analog front-end and ADC architecture. Each electrode captures one biopotential signal and converts to a frequency modulated signal using a VCO tuned to a unique frequency. Each electrode then buffers its output onto a shared analog line that aggregates all of the FM-mo...
1.025893
0.028148
0.028148
0.028148
0.028148
0.022222
0.012511
0.001307
0.000002
0
0
0
0
0
A Deep-Subthreshold Variation-Aware 0.2-V Open-Loop VCO-Based ADC This article demonstrates the potential of deep-subthreshold mixed-signal circuits in delivering medium-to-high performance to supply-constrained, energy-harvesting Internet of Things (IoT) sensing applications. This effort encapsulates the design and implementation of an ultra-low-voltage (ULV) 0.2-V open-loop VCO-based analog-to-digital converter (ADC). A replica VCO facilitates variation-aware VCO analog linearization. Analog phase-domain signal processing (APSP) techniques for beat-frequency extraction, phase-interpolation, and phase-folding relax constraints on both voltage-to-frequency analog circuitry and frequency-to-digital synchronous digital hardware. High-speed multi-phase frequency-to-digital converters (FDCs) and multi-rate digital back-end enable a sampling speed of 35 MS/s. The ADC prototype is implemented in 28-nm CMOS and achieves a peak SNDR of 64.4/59.9 dB, equivalent to an ENOB of 10.4/9.7 over 80-/160-kHz bandwidth (BW). The ADC core occupies an active area of 0.12 mm2 and consumes 15.9 μW, resulting in a Walden and Schreier FoM of, respectively, 73.3/61.5 fJ/c-s and 161.4/159.9 dB at the corresponding BW configurations. Measurements across multiple ICs and supply voltages consolidate the value of variation-aware deep-subthreshold open-loop ADCs.
An Ultra-Low Voltage Level Shifter Using Revised Wilson Current Mirror for Fast and Energy-Efficient Wide-Range Voltage Conversion from Sub-Threshold to I/O Voltage This paper presents a novel ultra-low voltage level shifter for fast and energy-efficient wide-range voltage conversion from sub-threshold to I/O voltage. By addressing the voltage drop and non-optimal feedback control in a state-of-the-art level shifter based on Wilson current mirror, the proposed level shifter with revised Wilson current mirror significantly improves the delay and power consumption while achieving a wide voltage conversion range. It also employs mixed-Vt device and device sizing aware of inverse narrow width effect to further improve the delay and power consumption. Measurement results at 0.18 μm show that compared with the Wilson current mirror based level shifter, the proposed level shifter improves the delay, switching energy and leakage power by up to 3×, 19×, 29× respectively, when converting 0.3 V to a voltage between 0.6 V and 3.3 V. More specifically, it achieves 1.03 (or 1.15) FO4 delay, 39 (or 954) fJ/transition and 160 (or 970) pW leakage power, when converting 0.3 V to 1.8 V (or 3.3 V), which is better than several state-of-the-art level shifters for similar range voltage conversion. The measurement results also show that the proposed level shifter has good delay scalability with supply voltage scaling and low sensitivity to process and temperature variations.
Time Domain Processing Techniques Using Ring Oscillator-Based Filter Structures. The ability to process time-encoded signals with high fidelity is becoming increasingly important for the time domain (TD) circuit techniques that are used at the advanced nanometer technology nodes. This paper proposes a compact oscillator-based subsystem that performs precise filtering of asynchronous pulse-width modulation encoded signals and makes extensive use of digital logic, enabling low-v...
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage-controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. The stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with a non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across a 10 kHz bandwidth with a full scale of 200 mVpp, and a ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifacts within 30 μs while small neural signals can be continuously monitored.
A Fully Integrated 16-Channel Closed-Loop Neural-Prosthetic CMOS SoC With Wireless Power and Bidirectional Data Telemetry for Real-Time Efficient Human Epileptic Seizure Control. A 16-channel closed-loop neuromodulation system-on-chip (SoC) for human epileptic seizure control is proposed and designed. In the proposed SoC, a 16-channel neural-signal acquisition unit (NSAU), a biosignal processor (BSP), a 16-channel high-voltage-tolerant stimulator (HVTS), and wireless power and bidirectional data telemetry are designed. In the NSAU, the input protection circuit is used to p...
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
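The key-to-node mapping can be sketched in a few lines. The snippet below is a toy illustration with a small identifier space and a linear scan; the actual protocol reaches the successor in O(log N) hops using finger tables.

```python
# Sketch of Chord's key -> node mapping: a key is stored at its successor,
# the first node whose identifier is >= the key on the circular ID space.
# A linear scan is used here for clarity only.
import hashlib

ID_SPACE = 2 ** 16   # small toy identifier space

def chord_id(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % ID_SPACE

def successor(key_id: int, node_ids):
    ring = sorted(node_ids)
    for nid in ring:
        if nid >= key_id:
            return nid
    return ring[0]            # wrap around the ring

nodes = [chord_id(f"node-{i}") for i in range(8)]
key = chord_id("some-data-item")
print(key, "->", successor(key, nodes))
```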
Directed diffusion: a scalable and robust communication paradigm for sensor networks Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network.
ImageNet Large Scale Visual Recognition Challenge. The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
DieHard: probabilistic memory safety for unsafe languages Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of Die-Hard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application.
A Clustering Scheme For Hierarchical Control In Multi-Hop Wireless Networks In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices, whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.
A survey of state and disturbance observers for practitioners This paper gives a unified and historical review of observer design for the benefit of practitioners. It is unified in the sense that all observers are examined in terms of: 1) the assumed dynamic structure of the plant; 2) the required information, including the input signals and modeling information of the plant; and 3) the implementation equation of the observer. This allows a practitioner, with a particular observer design problem in mind, to quickly find a suitable solution. The review is historical in the sense that it follows the evolution of ideas in observer design in the last half century. From the distinction in problem formulation, required modeling information and the observer design goal, we can see two schools of thought: one is developed in the framework of modern control theory; the other is based on disturbance estimation, which has been, to some extent, overlooked.
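For readers who want a concrete instance of the "implementation equation" mentioned above, the classical Luenberger observer for a linear plant is the standard textbook example (stated here as general background, not as this survey's contribution). For a plant $\dot{x} = Ax + Bu$, $y = Cx$, the observer is

$$\dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}),$$

so the estimation error $e = x - \hat{x}$ obeys $\dot{e} = (A - LC)\,e$ and the gain $L$ is chosen to make $A - LC$ stable; disturbance observers extend this template by augmenting the plant model with a disturbance state to be estimated alongside $x$.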
Highly sensitive Hall magnetic sensor microsystem in CMOS technology A highly sensitive magnetic sensor microsystem based on a Hall device is presented. This microsystem consists of a Hall device improved by an integrated magnetic concentrator and new circuit architecture for the signal processing. It provides an amplification of the sensor signal with a resolution better than 30 μV and a periodic offset cancellation while the output of the microsystem is av...
16.7 A 20V 8.4W 20MHz four-phase GaN DC-DC converter with fully on-chip dual-SR bootstrapped GaN FET driver achieving 4ns constant propagation delay and 1ns switching rise time Recently, the demand for miniaturized and fast transient response power delivery systems has been growing in high-voltage industrial electronics applications. Gallium Nitride (GaN) FETs showing a superior figure of merit (Rds,ON × Qg) in comparison with silicon FETs [1] can enable both high-frequency and high-efficiency operation in these applications, thus making power converters smaller, faster and more efficient. However, the lack of GaN-compatible high-speed gate drivers is a major impediment to fully take advantage of GaN FET-based power converters. Conventional high-voltage gate drivers usually exhibit propagation delay, tdelay, of up to several 10s of ns in the level shifter (LS), which becomes a critical problem as the switching frequency, fsw, reaches the 10MHz regime. Moreover, the switching slew rate (SR) of driving GaN FETs needs particular care in order to maintain efficient and reliable operation. Driving power GaN FETs with a fast SR results in large switching voltage spikes, risking breakdown of low-Vgs GaN devices, while slow SR leads to long switching rise time, tR, which degrades efficiency and limits fsw. In [2], large tdelay and long tR in the GaN FET driver limit its fsw to 1MHz. A design reported in [3] improves tR to 1.2ns, thereby enabling fsw up to 10MHz. However, the unregulated switching dead time, tDT, then becomes a major limitation to further reduction of tdelay. This results in limited fsw and narrower range of VIN-VO conversion ratio. Interleaved multiphase topologies can be the most effective way to increase system fsw. However, each extra phase requires a capacitor for bootstrapped (BST) gate driving which incurs additional cost and complexity of the PCB design. Moreover, the requirements of fsw synchronization and balanced current sharing for high fsw operation in multiphase implementation are challenging.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
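The general idea of slope-dependent sampling can be sketched abstractly as below. This is not the paper's pulse-generator circuit, and the thresholds and step sizes are made-up illustrative values; the point is only that sampling densely where the signal changes quickly and sparsely where it is flat yields compression.

```python
# Abstract sketch of slope-adaptive (nonuniform) sampling: take samples more
# often where the signal slope is large and less often where it is small.
import math

def slope_adaptive_sample(signal, dt, fast_step=1, slow_step=8, slope_thresh=0.04):
    samples, i = [], 0
    while i < len(signal) - 1:
        slope = abs(signal[i + 1] - signal[i]) / dt   # local slope estimate
        samples.append((i * dt, signal[i]))
        i += fast_step if slope > slope_thresh else slow_step
    return samples

sig = [math.sin(2 * math.pi * t / 100) for t in range(400)]
picked = slope_adaptive_sample(sig, dt=1.0)
print(len(sig), "input points ->", len(picked), "samples")  # compression
```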
1.2
0.2
0.2
0.1
0.05
0
0
0
0
0
0
0
0
0
A Highly Integrated Tri-Path Hybrid Buck Converter With Reduced Inductor Current and Self-Balanced Flying Capacitor Voltage This paper presents a single-stage tri-path buck converter with reduced inductor current and self-balanced flying capacitor voltage. The proposed converter introduces capacitor paths to reduce the average inductor current and inductor current ripple, hence decreasing the conduction loss. Operating with two states per conversion cycle, it exhibits a relatively low voltage conversion ratio of D/(1 + 2D). Besides, it realizes a self-balanced flying capacitor voltage in the charge redistribution phase. Similar to the conventional buck converter, the proposed converter in continuous-current mode only has two complex poles. Therefore, we design a Type-III compensator to obtain a good transient response. Moreover, the circuit does not require extra supplies for gate drivers due to the reutilization of the flying capacitor voltages, eliminating additional circuit and power overheads. The proposed converter, validated in a 65-nm standard CMOS technology, regulates a 0.7 V – 1 V output voltage from a 3.3 V – 4 V input voltage, delivering a maximum output power of 270 mW. The peak efficiency is 84%, with a switching frequency up to 5 MHz.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area^2 product (EDA^2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA^2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
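As a reminder of the algorithm being surveyed, for a problem of the form minimize $f(x) + g(z)$ subject to $Ax + Bz = c$, the scaled-form ADMM iteration is

$$x^{k+1} = \arg\min_x \Big( f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^k - c + u^k\|_2^2 \Big)$$
$$z^{k+1} = \arg\min_z \Big( g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^k\|_2^2 \Big)$$
$$u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c,$$

where $u$ is the scaled dual variable and $\rho > 0$ is the penalty parameter; for the lasso, for example, the $z$-update reduces to elementwise soft thresholding.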
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε^2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D + √n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows large interferers to be attenuated in the digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 μVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 μW per channel from a 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
RockSalt: better, faster, stronger SFI for the x86 Software-based fault isolation (SFI), as used in Google's Native Client (NaCl), relies upon a conceptually simple machine-code analysis to enforce a security policy. But for complicated architectures such as the x86, it is all too easy to get the details of the analysis wrong. We have built a new checker that is smaller, faster, and has a much reduced trusted computing base when compared to Google's original analysis. The key to our approach is automatically generating the bulk of the analysis from a declarative description which we relate to a formal model of a subset of the x86 instruction set architecture. The x86 model, developed in Coq, is of independent interest and should be usable for a wide range of machine-level verification tasks.
The equational theory of pomsets Pomsets have been introduced as a model of concurrency. Since a pomset is a string in which the total order has been relaxed to be a partial order, in this paper we view them as a generalization of strings, and investigate their algebraic properties. In particular, we investigate the axiomatic properties of pomsets, sets of pomsets and ideals of pomsets, under such operations as concatenation, parallel composition, union and their associated closure operations. We find that the equational theory of sets, pomsets under concatenation, parallel composition and union is finitely axiomatizable, whereas the theory of languages under the analogous operations is not. A similar result is obtained for ideals of pomsets, which incorporate the notion of subsumption which is also known as augmentation. Finally, we show that the addition of any closure operation (parallel or serial) leads to nonfinite axiomatizability of the resulting equational theory.
Codejail: Application-Transparent Isolation of Libraries with Tight Program Interactions.
Conditional Speculation: An Effective Approach to Safeguard Out-of-Order Execution Against Spectre Attacks Speculative execution side-channel vulnerabilities such as Spectre reveal that conventional architecture designs lack security consideration. This paper proposes a software-transparent defense mechanism, named Conditional Speculation, against Spectre vulnerabilities found on traditional out-of-order microprocessors. It introduces the concept of security dependence to mark speculative memory instructions which could leak information with potential security risk. More specifically, security-dependent instructions are detected and marked with suspect speculation flags in the Issue Queue. All the instructions can be speculatively issued for execution in accordance with the classic out-of-order pipeline. For those instructions with suspect speculation flags, they are considered as safe instructions if their speculative execution will not refill new cache lines with unauthorized privilege data. Otherwise, they are considered as unsafe instructions and thus not allowed to execute speculatively. To reduce the performance impact from not executing unsafe instructions speculatively, we investigate two filtering mechanisms, the Cache-hit based Hazard Filter and the Trusted Page Buffer based Hazard Filter, to filter out false security hazards. Our design philosophy is to speculatively execute safe instructions to maintain the performance benefits of out-of-order execution while blocking the speculative execution of unsafe instructions for security consideration. We evaluate Conditional Speculation in terms of performance, security and area. The experimental results show that the hardware overhead is marginal and the performance overhead is minimal.
Speculative Probing: Hacking Blind in the Spectre Era To defeat ASLR or more advanced fine-grained and leakage-resistant code randomization schemes, modern software exploits rely on information disclosure to locate gadgets inside the victim's code. In the absence of such info-leak vulnerabilities, attackers can still hack blind and derandomize the address space by repeatedly probing the victim's memory while observing crash side effects, but doing so is only feasible for crash-resistant programs. However, high-value targets such as the Linux kernel are not crash-resistant. Moreover, the anomalously large number of crashes is often easily detectable. In this paper, we show that the Spectre era enables an attacker armed with a single memory corruption vulnerability to hack blind without triggering any crashes. Using speculative execution for crash suppression allows the elevation of basic memory write vulnerabilities into powerful speculative probing primitives that leak through microarchitectural side effects. Such primitives can repeatedly probe victim memory and break strong randomization schemes without crashes and bypass all deployed mitigations against Spectre-like attacks. The key idea behind speculative probing is to break Spectre mitigations using memory corruption and resurrect Spectre-style disclosure primitives to mount practical blind software exploits. To showcase speculative probing, we target the Linux kernel, a crash-sensitive victim that has so far been out of reach of blind attacks, mount end-to-end exploits that compromise the system with just-in-time code reuse and data-only attacks from a single memory write vulnerability, and bypass strong Spectre and strong randomization defenses. Our results show that it is crucial to consider synergies between different (Spectre vs. code reuse) threat models to fully comprehend the attack surface of modern systems.
Binsec/Rel: Efficient Relational Symbolic Execution for Constant-Time at Binary-Level The constant-time programming discipline (CT) is an efficient countermeasure against timing side-channel attacks, requiring the control flow and the memory accesses to be independent from the secrets. Yet, writing CT code is challenging as it demands to reason about pairs of execution traces (2-hypersafety property) and it is generally not preserved by the compiler, requiring binary-level analysis. Unfortunately, current verification tools for CT either reason at higher level (C or LLVM), or sacrifice bug-finding or bounded-verification, or do not scale. We tackle the problem of designing an efficient binary-level verification tool for CT providing both bug-finding and bounded-verification. The technique builds on relational symbolic execution enhanced with new optimizations dedicated to information flow and binary-level analysis, yielding a dramatic improvement over prior work based on symbolic execution. We implement a prototype, BINSEC/REL, and perform extensive experiments on a set of 338 cryptographic implementations, demonstrating the benefits of our approach in both bug-finding and bounded-verification. Using BINSEC/REL, we also automate a previous manual study of CT preservation by compilers. Interestingly, we discovered that gcc -O0 and backend passes of clang introduce violations of CT in implementations that were previously deemed secure by a state-of-the-art CT verification tool operating at LLVM level, showing the importance of reasoning at binary-level.
InSpectre: Breaking and Fixing Microarchitectural Vulnerabilities by Formal Analysis The recent Spectre attacks have demonstrated the fundamental insecurity of current computer microarchitecture. The attacks use features like pipelining, out-of-order and speculation to extract arbitrary information about the memory contents of a process. A comprehensive formal microarchitectural model capable of representing the forms of out-of-order and speculative behavior that can meaningfully be implemented in a high performance pipelined architecture has not yet emerged. Such a model would be very useful, as it would allow the existence and non-existence of vulnerabilities, and soundness of countermeasures to be formally established. This paper presents such a model targeting single core processors. The model is intentionally very general and provides an infrastructure to define models of real CPUs. It incorporates microarchitectural features that underpin all known Spectre vulnerabilities. We use the model to elucidate the security of existing and new vulnerabilities, as well as to formally analyze the effectiveness of proposed countermeasures. Specifically, we discover three new (potential) vulnerabilities, including a new variant of Spectre v4, a vulnerability on speculative fetching, and a vulnerability on out-of-order execution, and analyze the effectiveness of existing countermeasures including constant time and serializing instructions.
Efficient Cache Attacks on AES, and Countermeasures We describe several software side-channel attacks based on inter-process leakage through the state of the CPU's memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux's dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.
Fog computing and its role in the internet of things Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
File Transfer Protocol
The Essence of P2P: A Reference Architecture for Overlay Networks The success of the P2P idea has created a huge diversity of approaches, among which overlay networks, for example, Gnutella, Kazaa, Chord, Pastry, Tapestry, P-Grid, or DKS, have received specific attention from both developers and researchers. A wide variety of algorithms, data structures, and architectures have been proposed. The terminologies and abstractions used, however, have become quite inconsistent since the P2P paradigm has attracted people from many different communities, e.g., networking, databases, distributed systems, graph theory, complexity theory, biology, etc. In this paper we propose a reference model for overlay networks which is capable of modeling different approaches in this domain in a generic manner. It is intended to allow researchers and users to assess the properties of concrete systems, to establish a common vocabulary for scientific discussion, to facilitate the qualitative comparison of the systems, and to serve as the basis for defining a standardized API to make overlay networks interoperable.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
Integration operators for generating RDF/OWL-based user defined mediator views in a grid environment Research and development activities relating to the grid have generally focused on applications where data is stored in files. However, many scientific and commercial applications are highly dependent on Information Servers (ISs) for storage and organization of their data. A data-information system that supports operations on multiple information servers in a grid environment is referred to as an interoperable grid system. Different perceptions by end-users of interoperable systems in a grid environment may lead to different reasons for integrating data. Even the same user might want to integrate the same distributed data in various ways to suit different needs, roles or tasks. Therefore multiple mediator views are needed to support this diversity. This paper describes our approach to supporting semantic interoperability in a heterogeneous multi-information server grid environment. It is based on using Integration Operators for generating multiple semantically rich RDF/OWL-based user defined mediator views above the grid participating ISs. These views support different perceptions of the distributed and heterogeneous data available. A set of grid services are developed for the implementation of the mediator views.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
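A minimal Python sketch of the event-driven idea behind the comparison above (this is not the paper's power model; the toy signal, the level spacing delta, and the rates are illustrative assumptions): an ideal level-crossing sampler only emits an event when the input crosses one of the uniformly spaced levels, so a mostly quiet biosignal produces far fewer samples than a fixed-rate converter.

```python
import numpy as np

def level_crossing_sample(signal, delta):
    """Emit (index, level) events whenever the signal crosses a level spaced
    `delta` away from the last emitted level (ideal level-crossing sampler)."""
    events = []
    last_level = np.round(signal[0] / delta) * delta
    for i, x in enumerate(signal):
        while x >= last_level + delta:      # upward crossing(s)
            last_level += delta
            events.append((i, last_level))
        while x <= last_level - delta:      # downward crossing(s)
            last_level -= delta
            events.append((i, last_level))
    return events

# Toy "biosignal": mostly flat, with a short burst of activity around t = 0.5 s.
fs = 1000                                   # fixed-rate reference: 1 kS/s
t = np.arange(0, 1, 1 / fs)
sig = 0.05 * np.sin(2 * np.pi * 1 * t) + (np.abs(t - 0.5) < 0.05) * np.sin(2 * np.pi * 40 * t)

lc_events = level_crossing_sample(sig, delta=0.05)
print(f"fixed-rate samples: {len(sig)}, level-crossing events: {len(lc_events)}")
```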
1.11
0.1
0.1
0.1
0.1
0.05
0.025
0.000333
0
0
0
0
0
0
A 12-Bit 20-kS/s 640-nW SAR ADC With a VCDL-Based Open-Loop Time-Domain Comparator This brief presents a 12-bit ultra-low-power asynchronous successive approximation register (SAR) analog-to-digital converter (ADC). A voltage-controlled delay line (VCDL) based open-loop time-domain comparator is proposed and analyzed, achieving low noise and ultra-low power performance. By employing the mixed switching scheme, the segmented capacitive digital-to-analog converter (CDAC) arrays as well as the synchronous data-weighted averaging (DWA) calibration block, the proposed SAR ADC can operate from 1.8 V down to 0.8 V at 20–200 kS/s. The designed ADC is fabricated in a 0.18-μm CMOS process and the measurement results show the proposed SAR ADC achieves an SNDR of 65 dB with power consumption of 647 nW from a 0.8 V power supply at 20 kS/s.
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
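A toy sketch of the random-demodulation acquisition model described in the abstract above (the frame length, the number of measurements, and the ±1 chipping sequence are illustrative assumptions, not values from the paper): the input is chipped by a pseudorandom ±1 sequence and integrated-and-dumped at a sub-Nyquist rate, which is equivalent to a linear measurement y = Φx that a sparse solver can invert.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                      # Nyquist-rate samples per frame
M = 32                       # sub-Nyquist measurements (integrate-and-dump outputs)

# Signal that is sparse in frequency: two tones on the DFT grid.
n = np.arange(N)
x = np.cos(2 * np.pi * 17 * n / N) + 0.5 * np.cos(2 * np.pi * 63 * n / N)

# Random demodulation: chip with a +/-1 sequence, then integrate and dump.
chips = rng.choice([-1.0, 1.0], size=N)
y = (chips * x).reshape(M, N // M).sum(axis=1)      # M low-rate measurements

# Equivalent linear model y = Phi @ x, with Phi = (integrate-and-dump) * diag(chips).
Phi = np.zeros((M, N))
for m in range(M):
    Phi[m, m * (N // M):(m + 1) * (N // M)] = chips[m * (N // M):(m + 1) * (N // M)]
assert np.allclose(Phi @ x, y)
# A sparse solver (e.g. OMP or basis pursuit) applied to y = (Phi @ F_inverse) s
# would then recover the sparse frequency-domain vector s.
```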
Ultra-High Input Impedance, Low Noise Integrated Amplifier for Noncontact Biopotential Sensing Noncontact electrocardiogram/electroencephalogram/electromyogram electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex dc-biasing schemes. We have designed and fabricated a custom integrated noncontact sensor front-end amplifier that fully bootstraps internal and external parasitic impedances. DC stability without the need for external large valued resistances is ensured by an ac bootstrapped, low-leakage, on-chip biasing network. The amplifier achieves, without neutralization, an input impedance of 60 fF ∥ 50 TΩ, an input-referred noise of 0.05 fA/√Hz and 200 nV/√Hz at 1 Hz, and a supply current of 1.5 μA per channel at a 3.3 V supply voltage.
A high input impedance low-noise instrumentation amplifier with JFET input This paper presents a high input impedance instrumentation amplifier with low-noise low-power operation. A JFET input pair is employed instead of CMOS to significantly reduce the flicker noise. This amplifier features high input impedance (15.3 GΩ∥1.39 pF) by using a current feedback technique and the JFET input. This amplifier has a mid-band gain of 39.9 dB, draws 3.65 μA from a 2.8-V supply, and exhibits an input-referred noise of 3.81 μVrms integrated from 10 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 3.23.
A 0.5–1.1-V Adaptive Bypassing SAR ADC Utilizing the Oscillation-Cycle Information of a VCO-Based Comparator A successive approximation register (SAR) analog-to-digital converter (ADC) with a voltage-controlled oscillator (VCO)-based comparator is presented in this paper. The relationship between the input voltage and the number of oscillation cycles (NOC) to reach a VCO-comparator decision is explored, implying an inherent coarse quantization in parallel with the normal comparison. The NOC as a design parameter is introduced and analyzed with noise, metastability, and tradeoff considerations. The NOC is exploited to bypass a certain number of SAR cycles for higher power efficiency of VCO-based SAR ADCs. To cope with the process, voltage, and temperature (PVT) variations, an adaptive bypassing technique is proposed, tracking and correcting window sizes in the background. Fabricated in a 40-nm CMOS process, the ADC achieves a peak effective number of bits of 9.71 b at 10 MS/s. Walden figure of merit (FoM) of 2.4–6.85 fJ/conv.-step is obtained over a wide range of supply voltages and sampling rates. Measurement has been carried out under typical, fast-fast, and slow-slow process corners and 0 °C–100 °C temperature range, showing that the proposed ADC is robust over PVT variations without any off-chip calibration or tuning.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential successive approximation register (SAR) analog-to-digital converter (ADC) with a resolution of 10 bits and a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
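A behavioral sketch of the basic successive-approximation search that underlies the SAR ADCs described above (a minimal idealized model; the split-capacitor switching, event-triggered redundancy cycle, and time-domain comparator from these abstracts are intentionally not modeled):

```python
def sar_convert(vin, vref=1.0, bits=10):
    """Behavioral SAR conversion: binary search on the DAC voltage,
    one comparator decision per bit (ideal, no redundancy, no noise)."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)              # tentatively set the current bit
        vdac = vref * trial / (1 << bits)    # ideal DAC output for that code
        if vin >= vdac:                      # comparator decision: keep the bit
            code = trial
    return code

print(sar_convert(0.4037, vref=1.0, bits=10))   # -> 413 (floor of 0.4037 * 1024)
```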
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
Exploiting availability prediction in distributed systems Loosely-coupled distributed systems have significant scale and cost advantages over more traditional architectures, but the availability of the nodes in these systems varies widely. Availability modeling is crucial for predicting per-machine resource burdens and understanding emergent, system-wide phenomena. We present new techniques for predicting availability and test them using traces taken from three distributed systems. We then describe three applications of availability prediction. The first, availability-guided replica placement, reduces object copying in a distributed data store while increasing data availability. The second shows how availability prediction can improve routing in delay-tolerant networks. The third combines availability prediction with virus modeling to improve forecasts of global infection dynamics.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
An efficient low-cost fixed-point digital down converter with modified filter bank In radar system, as the most important part of IF radar receiver, digital down converter (DDC) extracts the baseband signal needed from modulated IF signal, and down-samples the signal with decimation factor of 20. This paper proposes an efficient low-cost structure of DDC, including NCO, mixer and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. Then an optimized fixed-point programming is designed in order to implement DDC on fixed-point DSP or FPGA. The simulation results show that the proposed DDC achieves an expectant specification in application of IF radar receiver.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1.2
0.2
0.2
0.2
0.2
0.2
0.1
0
0
0
0
0
0
0
A 69.5 mW 20 GS/s 6b Time-Interleaved ADC With Embedded Time-to-Digital Calibration in 32 nm CMOS SOI A 20 GS/s 6b time-interleaved ADC is implemented in 32 nm CMOS SOI with an embedded time-to-digital converter to sense timing skew, and the randomness of process mismatch is exploited to compensate for the clock misalignment and dynamic offset errors of comparators that occur during high-speed operation. To achieve low-power consumption at high-speed operation with small-size transistors, a low-complexity on-chip calibration reduces gain, offset, and delay mismatches in the background. With the timing skew calibration, the spurs due to clock misalignment are reduced by 20 dB. The proposed ADC achieves an SNDR of 30.7 dB at the Nyquist frequency and consumes only 69.5 mW with a figure-of-merit of 124 fJ/conversion-step.
A 4-Bit, 1.6 GS/s Low Power Flash ADC, Based on Offset Calibration and Segmentation A low power 4-bit, 1.6 GS/s flash ADC is presented. A new power reduction technique which masks the unused blocks in a semi-pipeline chain of latches and encoders is introduced. The proposed circuit determines the unused blocks based on a pre-sensing of the signal. Moreover, a reference voltage generator with very low static power dissipation is used. Novel techniques to reduce the sensitivity to dynamic noise are proposed to suppress the noise effects on the reference generator. The proposed circuit reduces the power consumption by 20 percent compared to the conventional structure when a Nyquist-rate OFDM signal is applied. The INL and DNL of the converter are smaller than 0.3 LSB after calibration. The converter offers 3.8 effective number of bits (ENOB) at a 1.6 GS/s sampling rate with a low frequency input signal and more than 1.8 GHz effective resolution bandwidth (ERBW) at this sampling rate. The converter consumes a mere 15.5 mW from a 1.8 V supply, yielding an FoM of 695 fJ/conversion-step, and occupies 0.3 mm² in a 0.18 μm standard CMOS process.
Integration of Array Antennas in Chip Package for 60-GHz Radios. This paper discusses the integration of array antennas in chip packages for highly integrated 60-GHz radios. First, we evaluate fixed-beam array antennas, showing that most of them suffer from feed network complexity and require sophisticated process techniques to achieve enhanced performance. We describe the grid array antenna and show that is a good choice for fixed-beam array antenna applicatio...
IEEE 802.11ad: directional 60 GHz communication for multi-Gigabit-per-second Wi-Fi [Invited Paper] With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave band. However, multi-gigabit-per-second throughput and small interference footprint come at the price of adverse signal propagation characteristics, and require a fundamental rethinking of Wi-Fi communication principles. This article describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular, we study the transition from omnidirectional to highly directional communication and its impact on the design of IEEE 802.11ad.
A 17 mW 3-to-5 GHz Duty-Cycled Vital Sign Detection Radar Transceiver With Frequency Hopping and Time-Domain Oversampling. This paper presents a low power interference-robust radar transceiver architecture for noncontact vital sign detection and mobile healthcare applications. A duty-cycled transceiver design is proposed to significantly reduce power consumption of front-end circuits. Occupying 3-to-5 GHz band with four 500 MHz sub-channels, the radar mitigates the narrowband interference (NBI) problem with the freque...
Implementation of Low-Power 6-8 b 30-90 GS/s Time-Interleaved ADCs With Optimized Input Bandwidth in 32 nm CMOS. A model for voltage-based time-interleaved sampling is introduced with two implementations of highly interleaved analog-to-digital converters (ADCs) for 100 Gb/s communication systems. The model is suitable for ADCs where the analog input bandwidth is of concern and enables a tradeoff between different architectures with respect to the analog input bandwidth, the hold time of the sampled signal, a...
A 243-mW 1.25–56-Gb/s Continuous Range PAM-4 42.5-dB IL ADC/DAC-Based Transceiver in 7-nm FinFET This article presents a compact analog-to-digital converter (ADC)/digital-to-analog converter (DAC) digital signal processing (DSP)-based long reach (LR) transceiver in 7-nm FinFET technology that operates seamlessly from 3.5 to 56 Gb/s in pulse-amplitude modulation (PAM-4) [from 1.25 to 28 Gb/s in non-return to zero (NRZ) mode] and consumes only 243 mW at 56 Gb/s. The receiver (RX) front end consists of a two-stage continuous-time linear equalizer (CTLE), a 40-way time-interleaved (TI) successive approximation register (SAR)-ADC, a DSP equalizer containing a 17-tap feed-forward equalizer (FFE) working concurrently with a one-tap speculative decision feedback equalizer (DFE) and a reflection canceling FFE, which implements four individually roaming taps. Clock recovery is achieved on a dedicated low latency path consisting of a five-tap FFE, slicer, time error detector (TED), and loop filter driving a dedicated LC digital-controlled oscillator (DCO). The transmit section consists of a variety of pattern generators, a five-tap finite impulse response (FIR) section, and a terminated DAC as an analog transmitter. When working on a 42.5-dB-LR channel at 56 Gb/s PAM-4, the transceiver consumes 243 mW from the 0.9-V (analog) and 0.75-V (digital) supplies, corresponding to an efficiency of 4.3 pJ/b.
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Multilevel k-way hypergraph partitioning In this paper, we present a new multilevel k-way hypergraph partitioning algorithm that substantially outperforms the existing state-of-the-art K-PM/LR algorithm for multi-way partitioning, both for optimizing local as well as global objectives. Experiments on the ISPD98 benchmark suite show that the partitionings produced by our scheme are on the average 15% to 23% better than those produced by the K-PM/LR algorithm, both in terms of the hyperedge cut as well as the (K-1) metric. Furthermore, our algorithm is significantly faster, requiring 4 to 5 times less time than that required by K-PM/LR.
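A small sketch of the two partitioning objectives mentioned in the abstract above, the hyperedge cut and the (K-1) metric (the example hypergraph and partition are illustrative assumptions, not from the benchmark suite):

```python
def cut_metrics(hyperedges, part):
    """Hyperedge cut and (K-1) objective for a k-way partition.
    `hyperedges` is a list of vertex lists, `part` maps vertex -> block id."""
    cut = 0
    k_minus_1 = 0
    for e in hyperedges:
        blocks = {part[v] for v in e}
        if len(blocks) > 1:
            cut += 1                    # hyperedge spans more than one block
        k_minus_1 += len(blocks) - 1    # each extra block touched costs 1
    return cut, k_minus_1

hedges = [[0, 1, 2], [2, 3], [1, 3, 4], [4, 5]]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(cut_metrics(hedges, part))        # -> (2, 2)
```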
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology
An architecture for survivable coordination in large distributed systems Coordination among processes in a distributed system can be rendered very complex in a large-scale system where messages may be delayed or lost and when processes may participate only transiently or behave arbitrarily, e.g., after suffering a security breach. In this paper, we propose a scalable architecture to support coordination in such extreme conditions. Our architecture consists of a collection of persistent data servers that implement simple shared data abstractions for clients, without trusting the clients or even the servers themselves. We show that, by interacting with these untrusted servers, clients can solve distributed consensus, a powerful and fundamental coordination primitive. Our architecture is very practical and we describe the implementation of its main components in a system called Fleet.
Minimum-Cost Data Delivery in Heterogeneous Wireless Networks With various wireless technologies developed, a ubiquitous and integrated architecture is envisioned for future wireless communication. An important optimization issue in such an integrated system is how to minimize the overall communication cost by intelligently utilizing the available heterogeneous wireless technologies while, at the same time, meeting the quality-of-service requirements of mobi...
OpenIoT: An open service framework for the Internet of Things The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market, and establishing a global IoT ecosystem with the worldwide use of IoT devices and softwares. We expect that the open IoT service framework we proposed will play an important role in the widespread adoption of the Internet of Things in our everyday life, enhancing our quality of life with a large number of innovative applications and services, but also offering endless opportunities to all of the stakeholders in the world of information and communication technologies.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.1
0.1
0.1
0.1
0.1
0.05
0.02
0
0
0
0
0
0
0
An algorithm for calculating indistinguishable states and clusters in finite-state automata with partially observable transitions This paper presents a new algorithm for efficiently calculating pairs of indistinguishable states in finite-state automata with partially observable transitions. The need to obtain pairs of indistinguishable states occurs in several classes of problems related to control under partial observation, diagnosis, or distributed control with communication for discrete event systems. The algorithm obtains all indistinguishable state pairs in polynomial time in the number of states and events in the system. Another feature of the algorithm is the grouping of states into clusters and the identification of indistinguishable cluster pairs. Clusters can be employed to solve control problems for partially observed systems.
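A minimal sketch in the spirit of the abstract above: a fixpoint over state pairs that are reachable under observation-equivalent behavior. This is a simplified, event-based variant (observable versus unobservable events), not the paper's transition-based algorithm or its cluster construction; the example automaton is an illustrative assumption.

```python
from collections import deque

def indistinguishable_pairs(delta, obs, init):
    """State pairs reachable under identically-looking event sequences.
    `delta` maps (state, event) -> state (partial function), `obs` is the set
    of observable events; unobservable events move either component alone."""
    events = {e for (_, e) in delta}
    seen = {(init, init)}
    queue = deque(seen)
    while queue:
        x, y = queue.popleft()
        succs = []
        for e in events:
            if e in obs:
                if (x, e) in delta and (y, e) in delta:   # same observation on both
                    succs.append((delta[(x, e)], delta[(y, e)]))
            else:
                if (x, e) in delta:                       # silent move of one component
                    succs.append((delta[(x, e)], y))
                if (y, e) in delta:
                    succs.append((x, delta[(y, e)]))
        for p in succs:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return {frozenset(p) for p in seen if p[0] != p[1]}

# 'u' is unobservable, 'a' observable: states 0 and 2 (and then 1 and 3) look alike.
delta = {(0, 'a'): 1, (0, 'u'): 2, (2, 'a'): 3}
print(indistinguishable_pairs(delta, obs={'a'}, init=0))
```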
Calculation of Siphons and Minimal Siphons in Petri Nets Based on Semi-Tensor Product of Matrices. In this paper, we address the problems of enumerating siphons and minimal siphons in ordinary Petri nets (PNs) by resorting to the semi-tensor product (STP) of matrices. First, a matrix equation, called the siphon equation (SE), is established by using STP. Second, an algorithm is proposed to calculate all siphons in ordinary PNs. An example is presented to illustrate the theoretical results and show that the proposed method is more effective than other existing methods in calculating all siphons of PNs. Third, an efficient recursion algorithm is also proposed, which can be applied to computing all minimal siphons for any ordinary PN. Last, some results on the computational complexity of the proposed algorithms are provided, as well as experimental results.
Deadlock avoidance in flexible manufacturing systems using finite automata A distinguishing feature of a flexible manufacturing system (FMS) is the ability to perform multiple tasks in one machine or workstation (alternative machining) and the ability to process parts according to more than one sequence of operations (alternative sequencing). In this paper, we address the issue of deadlock avoidance in systems having these characteristics. A deadlock-free and maximally permissive control policy that incorporates this flexibility is developed based on finite automata models of part process plans and the FMS. The resulting supervisory controller is used for dynamic evaluation of deadlock avoidance based on the remaining processing requirements of the parts
Supervisor Synthesis for Mealy Automata With Output Functions: A Model Transformation Approach. Recently, Ushio and Takai have proposed a Mealy-automata-based framework to study the non-blocking supervisory control problem under partial observation. This framework can handle observation uncertainties by taking both state-dependent observations and nondeterministic outputs into account. In this technical note, we propose a model-transformation-based approach to solve the supervisor synthesis problem in this framework. First, we propose a transformation algorithm that transforms the non-blocking supervisor synthesis problem for Mealy automata to a conventional supervisory synthesis problem under partial observation, which can be solved effectively by an existing algorithm. Then we show that the supervisor synthesized for the transformed problem indeed solves the original problem. Our results bridge the gap between the conventional supervisory control framework under partial observation and the recently proposed Mealy automata framework where nondeterministic output function is used.
Observability of Boolean networks via set controllability approach Controllability and observability of Boolean control networks (BCNs) are two fundamental properties. But verification of the latter is much harder than the former. This paper considers the observability of BCNs via controllability. First, set controllability is proposed, and a necessary and sufficient condition is obtained. Then a technique is developed to convert observability into an equivalent set controllability problem. Using the result for set controllability, a necessary and sufficient condition is also obtained for the observability of BCNs.
Epidemic Propagation With Positive and Negative Preventive Information in Multiplex Networks We propose a novel epidemic model based on two-layered multiplex networks to explore the influence of positive and negative preventive information on epidemic propagation. In the model, one layer represents a social network with positive and negative preventive information spreading competitively, while the other one denotes the physical contact network with epidemic propagation. The individuals who are aware of positive prevention will take more effective measures to avoid being infected than those who are aware of negative prevention. Taking the microscopic Markov chain (MMC) approach, we analytically derive the expression of the epidemic threshold for the proposed epidemic model, which indicates that the diffusion of positive and negative prevention information, as well as the topology of the physical contact network have a significant impact on the epidemic threshold. By comparing the results obtained with MMC and those with the Monte Carlo (MC) simulations, it is found that they are in good agreement, but MMC can well describe the dynamics of the proposed model. Meanwhile, through extensive simulations, we demonstrate the impact of positive and negative preventive information on the epidemic threshold, as well as the prevalence of infectious diseases. We also find that the epidemic prevalence and the epidemic outbreaks can be suppressed by the diffusion of positive preventive information and be promoted by the diffusion of negative preventive information.
On observability of discrete-event systems The observability of discrete-event systems is investigated. A discrete-event system G is modeled as the controlled generator of a formal language Lm(G) in the framework of Ramadge and Wonham. To control G, a supervisor S is developed whose action is to enable and disable the controllable events of G according to a record of occurrences of the observable events of G, in such a way that the resulting closed-loop system obeys some prespecified operating rules embodied in a given language K. A necessary and sufficient condition is found for the existence of a supervisor S such that Lm (S/G) = K. Based on this condition, a solution of the supervisory control and observation problem (SCOP) is obtained. Two examples are provided.
Computing size-independent matrix problems on systolic array processors A methodology to transform dense to band matrices is presented in this paper. This transformation is accomplished by triangular block partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of the processing elements (PEs) of the systolic array when dense matrices are processed. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
Distributed Subgradient Methods For Multi-Agent Optimization We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
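A toy instance of the distributed subgradient scheme described above (the four-agent network, the doubly stochastic weights, the quadratic local objectives, and the 1/k step size are all illustrative assumptions): each agent mixes its neighbors' estimates and then takes a step along its own local gradient, and all estimates drift toward the minimizer of the sum, here the average of the private targets.

```python
import numpy as np

# Each agent i holds f_i(x) = 0.5 * (x - a_i)^2; the network minimizes sum_i f_i,
# whose optimum is the average of the a_i.
a = np.array([1.0, 3.0, -2.0, 4.0])                  # private targets of 4 agents
W = np.array([[0.5, 0.25, 0.0, 0.25],                # doubly stochastic mixing matrix
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x = np.zeros(4)                                      # local estimates
for k in range(1, 201):
    step = 1.0 / k                                   # diminishing step size
    grad = x - a                                     # local gradients of the f_i
    x = W @ x - step * grad                          # consensus step + subgradient step
print(x, "target:", a.mean())                        # estimates approach 1.5
```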
Filtering by Aliasing In this manuscript we describe a fundamentally novel approach to the design of anti-aliasing filters. The approach, termed Filtering by Aliasing, incorporates the frequency-domain aliasing operation itself into the filtering task. The spectral content is spread with a periodic mixer and weighted with a simple analog filter before it aliases at the sampler. By designing the system according to the formulations presented in this manuscript, the sampled output will have been subjected to sharp, highly programmable anti-alias filtering. This manuscript describes the proposed Filtering by Aliasing idea, the effective programmable anti-aliasing filter, its design, and its range of frequency responses. The manuscript also addresses the implementation sensitivities of the proposed Filtering by Aliasing approach and provides a performance comparison against existing techniques in the context of reconfigurable anti-alias filtering.
A highly efficient domain-programmable parallel architecture for iterative LDPCC decoding We present a domain-programmable (code-independent) parallel architecture for efficiently implementing iterative probabilistic decoding of LDPC codes. The architecture is based on distributed computing and message passing. The exploited parallelism was found to be communication limited. To increase the utilization of the computational resources, we separate the routing process and state management functionalities performed by physical nodes from computation functionalities performed by function units that can be shared by multiple physical nodes. Simulation results show that the proposed architecture leads to improvements in FU utilization by 251%, 116%, and 209% compared to a hypothetical fully parallel custom implementation, a fully sequential implementation, and a proprietary FPGA custom implementation, respectively, that all use the same core FU design. Compared to an implementation on a shared-memory general-purpose parallel machine, the proposed architecture exhibits 75.6% improvement in scalability. We also introduce a novel low cost store-and-forward routing algorithm for deadlock avoidance in torus networks
Extremal cover times for random walks on trees
PuDianNao: A Polyvalent Machine Learning Accelerator Machine Learning (ML) techniques are pervasive tools in various emerging commercial applications, but have to be accommodated by powerful computer systems to process very large data. Although general-purpose CPUs and GPUs have provided straightforward solutions, their energy-efficiencies are limited due to their excessive supports for flexibility. Hardware accelerators may achieve better energy-efficiencies, but each accelerator often accommodates only a single ML technique (family). According to the famous No-Free-Lunch theorem in the ML domain, however, an ML technique performs well on a dataset may perform poorly on another dataset, which implies that such accelerator may sometimes lead to poor learning accuracy. Even if regardless of the learning accuracy, such accelerator can still become inapplicable simply because the concrete ML task is altered, or the user chooses another ML technique. In this study, we present an ML accelerator called PuDianNao, which accommodates seven representative ML techniques, including k-means, k-nearest neighbors, naive bayes, support vector machine, linear regression, classification tree, and deep neural network. Benefited from our thorough analysis on computational primitives and locality properties of different ML techniques, PuDianNao can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm^2, and consumes 596 mW only. Compared with the NVIDIA K20M GPU (28nm process), PuDianNao (65nm process) is 1.20x faster, and can reduce the energy by 128.41x.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
1.122
0.122
0.12
0.12
0.12
0.12
0.046857
0
0
0
0
0
0
0
Speculative Taint Tracking (STT): A Comprehensive Protection for Speculatively Accessed Data Speculative execution attacks present an enormous security threat, capable of reading arbitrary program data under malicious speculation, and later exfiltrating that data over microarchitectural covert channels. Since these attacks first rely on being able to read arbitrary data (potential secrets), a conservative approach to defeat all attacks is to delay the execution of instructions that read those secrets, until those instructions become non-speculative. This paper's premise is that it is safe to execute and selectively forward the results of speculative instructions that read secrets, which improves performance, as long as we can prove that the forwarded results do not reach potential covert channels. We propose a comprehensive hardware protection based on this idea, called Speculative Taint Tracking (STT), capable of protecting all speculatively accessed data. Our work addresses two key challenges. First, to safely selectively forward secrets, we must understand what instruction(s) can form covert channels. We provide a comprehensive study of covert channels on speculative microarchitectures, and use this study to develop hardware mechanisms that block each class of channel. Along the way, we find new classes of covert channels related to implicit flow on speculative machines. Second, for performance, it is essential to disable protection on previously protected data, as soon as doing so is safe. We identify that the earliest time is when the instruction(s) producing the protected data become non-speculative, and design a novel microarchitecture for disabling protection at this moment. We provide an extensive formal analysis showing that STT enforces a novel form of non-interference, with respect to all speculatively accessed data. We further evaluate STT on 21 SPEC and 9 PARSEC workloads, and find it adds only 8.5%/14.5% overhead (depending on attack model) relative to an insecure machine, while reducing overhead by 4.7×/18.8× relative to a baseline secure scheme.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
Non-crypto Hardware Hash Functions for High Performance Networking ASICs Hash functions are vital in networking. Hash-based algorithms are increasingly deployed in mission-critical, high speed network devices. These devices will need small, quick, hardware hash functions to keep up with Internet growth. There are many hardware hash functions used in this situation, foremost among them CRC-32. We develop parametrized methods for evaluating hash function output quality so as to better compare similar hash functions. We use these methods to explore the quality of candidate hash functions, including CRC-32, H3 (with fixed seed), MD5 and others. We also propose optimized building blocks for hardware hash functions based on SP-networks. Given a size budget of 4K gates and only 1 cycle to compute the result, we demonstrate a 128 bit input, 64 bit output hash function built using this framework that ranks highly in our tests.
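For illustration only (not the paper's code), here is a tiny software model of an H3-style hash, one of the hardware-friendly families this kind of study evaluates: the output is the XOR of fixed random seed words selected by the set bits of the input. The 128-bit-in/64-bit-out sizes mirror the example in the abstract; the seed value is arbitrary.

```python
# H3-style hash: XOR together per-bit random seed words for every set input bit.
import random

IN_BITS, OUT_BITS = 128, 64
rng = random.Random(0)
# One random OUT_BITS-wide word per input bit position (the "seed matrix").
Q = [rng.getrandbits(OUT_BITS) for _ in range(IN_BITS)]

def h3_hash(x):
    """Hash an IN_BITS-wide integer x to an OUT_BITS-wide value."""
    out = 0
    for i in range(IN_BITS):
        if (x >> i) & 1:
            out ^= Q[i]
    return out

print(hex(h3_hash(0xDEADBEEFCAFEBABE0123456789ABCDEF)))
```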
RECTANGLE: a bit-slice lightweight block cipher suitable for multiple platforms. In this paper, we propose a new lightweight block cipher named RECTANGLE. The main idea of the design of RECTANGLE is to allow lightweight and fast implementations using bit-slice techniques. RECTANGLE uses an SP-network. The substitution layer consists of 16 4×4 S-boxes in parallel. The permutation layer is composed of 3 rotations. As shown in this paper, RECTANGLE offers great performance in both hardware and software environments, which provides enough flexibility for different application scenarios. The following are 3 main advantages of RECTANGLE. First, RECTANGLE is extremely hardware-friendly. For the 80-bit key version, a one-cycle-per-round parallel implementation only needs 1600 gates for a throughput of 246 Kbits/sec at a 100 KHz clock and an energy efficiency of 3.0 pJ/bit. Second, RECTANGLE achieves a very competitive software speed among the existing lightweight block ciphers due to its bit-slice style. Using 128-bit SSE instructions, a bit-slice implementation of RECTANGLE reaches an average encryption speed of about 3.9 cycles/byte for messages around 3000 bytes. Last, but not least, we propose new design criteria for the RECTANGLE S-box. Due to our careful selection of the S-box and the asymmetric design of the permutation layer, RECTANGLE achieves a very good security-performance tradeoff. Our extensive and deep security analysis shows that the highest number of rounds that we can attack is 18 (out of 25).
Securing Branch Predictors with Two-Level Encryption Modern processors rely on various speculative mechanisms to meet performance demand. Branch predictors are one of the most important micro-architecture components to deliver performance. However, they have been under heavy scrutiny because of recent side-channel attacks. Branch predictors are indexed using the PC and recent branch histories. An adversary can manipulate these parameters to access and control the same branch predictor entry that a victim uses. Recent Spectre attacks exploit this to set up speculative-execution-based security attacks. In this article, we aim to mitigate branch predictor side-channels using two-level encryption. At the first level, we randomize the set-index by encrypting the PC using a per-context secret key. At the second level, we encrypt the data in each branch predictor entry. While periodic key changes make the branch predictor more secure, performance degradation can be significant. To alleviate performance degradation, we propose a practical set update mechanism that also considers parallelism in multi-banked branch predictors. We show that our mechanism exhibits only 1.0% and 0.2% performance degradation while changing keys every 10K and 50K cycles, respectively, which is much lower than other state-of-the-art approaches.
On the Classification of 4 Bit S-Boxes In this paper we classify all optimal 4 bit S-boxes. Remarkably, up to affine equivalence, there are only 16 different optimal S-boxes. This observation can be used to efficiently generate optimal S-boxes fulfilling additional criteria. One result is that an S-box which is optimal against differential and linear attacks is always optimal with respect to algebraic attacks as well. We also classify all optimal S-boxes up to the so called CCZ equivalence. We furthermore generated all S-boxes fulfilling the conditions on nonlinearity and uniformity for S-boxes used in the block cipher Serpent. Up to a slightly modified notion of equivalence, there are only 14 different S-boxes. Due to this small number it is not surprising that some of the S-boxes of the Serpent cipher are linear equivalent. Another advantage of our characterization is that it eases the highly non-trivial task of choosing good S-boxes for hardware dedicated ciphers a lot.
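As a hedged illustration of the optimality criteria discussed above, the sketch below computes the maximum difference-distribution-table entry and the maximum linear bias of a 4-bit S-box; the PRESENT S-box is used purely as a convenient, publicly known example of an optimal S-box, and the script is not the paper's classification tooling.

```python
# Check whether a 4-bit S-box meets the "optimal" bounds: bijective,
# maximum DDT entry of 4, and maximum absolute linear bias of 4.
def ddt_max(sbox):
    m = 0
    for a in range(1, 16):                      # nonzero input differences
        counts = [0] * 16
        for x in range(16):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        m = max(m, max(counts))
    return m

def lat_max(sbox):
    m = 0
    for a in range(1, 16):                      # nonzero input masks
        for b in range(1, 16):                  # nonzero output masks
            s = sum(1 for x in range(16)
                    if (bin(x & a).count("1") + bin(sbox[x] & b).count("1")) % 2 == 0)
            m = max(m, abs(s - 8))
    return m

# S-box of the block cipher PRESENT (a well-known optimal 4-bit S-box).
present = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print("bijective:", sorted(present) == list(range(16)))
print("max DDT entry:", ddt_max(present))    # 4 for an optimal S-box
print("max |LAT| bias:", lat_max(present))   # 4 for an optimal S-box
```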
SafeSpec: Banishing the Spectre of a Meltdown with Leakage-Free Speculation. Speculative attacks, such as Spectre and Meltdown, target speculative execution to access privileged data and leak it through a side-channel. In this paper, we introduce SafeSpec, a new model for supporting speculation in a way that is immune to side-channel leakage by storing the side effects of speculative instructions in separate structures until they commit. Additionally, we address the possibility of a covert channel from speculative instructions to committed instructions before these instructions are committed. We develop a cycle-accurate model of a modified design of an x86-64 processor and show that the performance impact is negligible.
Combining control-flow integrity and static analysis for efficient and validated data sandboxing In many software attacks, inducing an illegal control-flow transfer in the target system is one common step. Control-Flow Integrity (CFI) protects a software system by enforcing a pre-determined control-flow graph. In addition to providing strong security, CFI enables static analysis on low-level code. This paper evaluates whether CFI-enabled static analysis can help build efficient and validated data sandboxing. Previous systems generally sandbox memory writes for integrity, but avoid protecting confidentiality due to the high overhead of sandboxing memory reads. To reduce overhead, we have implemented a series of optimizations that remove sandboxing instructions if they are proven unnecessary by static analysis. On top of CFI, our system adds only 2.7% runtime overhead on SPECint2000 for sandboxing memory writes and adds modest 19% for sandboxing both reads and writes. We have also built a principled data-sandboxing verifier based on range analysis. The verifier checks the safety of the results of the optimizer, which removes the need to trust the rewriter and optimizer. Our results show that the combination of CFI and static analysis has the potential of bringing down the cost of general inlined reference monitors, while maintaining strong security.
How to share a secret In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
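A minimal sketch of the k-out-of-n idea described above, over a small prime field (the field size, secret, and parameters are illustrative; a real deployment would use a much larger field):

```python
# Shamir secret sharing: encode the secret as the constant term of a random
# degree-(k-1) polynomial; any k evaluations reconstruct it by interpolation.
import random

P = 2_147_483_647  # prime modulus (2^31 - 1), demo-sized only

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 shares -> 123456789
print(reconstruct(shares[2:]))   # a different set of 3 shares also works
```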
A Single-Chip 10-Band WCDMA/HSDPA 4-Band GSM/EDGE SAW-less CMOS Receiver With DigRF 3G Interface and +90 dBm IIP2 This paper describes the design and performance of a 90 nm CMOS SAW-less receiver with DigRF interface that supports 10 WCDMA bands (I, II, III, IV, V, VI, VIII, IX, X, XI) and 4 GSM bands (GSM850, EGSM900, DCS1800, PCS1900). The receiver is part of a single-chip SAW-less transceiver reference platform IC for mass-market smartphones, which has been designed to meet Category 10 HSDPA (High Speed Do...
A new class of asynchronous A/D converters based on time quantization This work is a contribution to a drastic change in standard signal processing chains. The main objective is to reduce the power consumption by one or two orders of magnitude. Integrated Smart Devices and Communicating Objects are application domains targeted by this work. In this context, we present a new class of Analog-to-Digital Converters (ADCs), based on an irregular sampling of the analog signal, and an asynchronous design. Because they are not conventional, a complete design methodology is presented. It determines their characteristics given the required effective number of bits and the analog signal properties. It is shown that our approach leads to a significant reduction in terms of hardware complexity and power consumption. A prototype has been designed for speech applications, using the STMicroelectronics 0.18-μm CMOS technology. Electrical simulations prove that the figure of merit is increased by more than one order of magnitude compared to synchronous Nyquist ADCs.
Blind adaptive estimation of modulation scheme for software defined radio A software defined radio (SDR) system is used to reconfigure all or many specifications such as the modulation scheme, demodulation, coding, etc. Therefore, supplementary information that allows recognition of these specifications by the receiver is required. Since much redundant supplementary information reduces efficiency in transmission, we have aimed to carry out blind adaptive demodulation, which can adaptively demodulate without supplementary information. This paper proposes and investigates both a blind adaptive algorithm for estimating the channel and one for estimating the modulation scheme. Computer simulation evaluates the proposed blind estimation algorithm.
BarrierPoint: Sampled simulation of multi-threaded applications Sampling is a well-known technique to speed up architectural simulation of long-running workloads while maintaining accurate performance predictions. A number of sampling techniques have recently been developed that extend well-known single-threaded techniques to allow sampled simulation of multi-threaded applications. Unfortunately, prior work is limited to non-synchronizing applications (e.g., server throughput workloads); requires the functional simulation of the entire application using a detailed cache hierarchy which limits the overall simulation speedup potential; leads to different units of work across different processor architectures which complicates performance analysis; or, requires massive machine resources to achieve reasonable simulation speedups. In this work, we propose BarrierPoint, a sampling methodology to accelerate simulation by leveraging globally synchronizing barriers in multi-threaded applications. BarrierPoint collects microarchitecture-independent code and data signatures to determine the most representative inter-barrier regions, called barrierpoints. BarrierPoint estimates total application execution time (and other performance metrics of interest) through detailed simulation of these barrierpoints only, leading to substantial simulation speedups. Barrierpoints can be simulated in parallel, use fewer simulation resources, and define fixed units of work to be used in performance comparisons across processor architectures. Our evaluation of BarrierPoint using NPB and Parsec benchmarks reports average simulation speedups of 24.7× (and up to 866.6×) with an average simulation error of 0.9% and 2.9% at most. On average, BarrierPoint reduces the number of simulation machine resources needed by 78×.
A VCO-Based Nonuniform Sampling ADC Using a Slope-Dependent Pulse Generator This paper presents a voltage-controlled oscillator (VCO)-based nonuniform sampling analog-to-digital converter (ADC) as an alternative to the level-crossing (LC)-based converters for digitizing biopotential signals. This work aims to provide a good signal-to-noise-and-distortion ratio at a low average sampling rate. In the proposed conversion method, a slope-dependent pulse generation block is used to provide a variable sample rate adjusted according to the input signal's slope. Simulation results show that the introduced method meets a target reconstruction quality with a sampling rate approaching 92 Sps, while on the same MIT-BIH Arrhythmia N 106 ECG benchmark, the classic LC-based approach requires a sampling rate higher than 500 Sps. The benefits of the proposed method are more remarkable when the input signal is very noisy. The proposed ADC achieves a compression ratio close to 4, but with only 5.4% root-mean-square difference when tested using the MIT-BIH Arrhythmia Database.
1.072593
0.066667
0.066667
0.066667
0.066667
0.033333
0.016395
0.002222
0
0
0
0
0
0
Adaptive Fuzzy Backstepping Control for A Class of Nonlinear Systems With Sampled and Delayed Measurements This paper investigates the adaptive fuzzy backstepping control and H∞ performance analysis for a class of nonlinear systems with sampled and delayed measurements. In the control scheme, a fuzzy-estimator (FE) model is used to estimate the states of the controlled plant, while the fuzzy logic systems are used to approximate the unknown nonlinear functions in the nonlinear system. The controller is obtained based on the FE model by combining the backstepping technique with the classic adaptive fuzzy control method. In the stability analysis, all the signals in the closed-loop system are guaranteed to be semiglobally uniformly ultimately bounded (SUUB) and the outputs of the system are proven to converge to a small neighborhood of origin. Furthermore, the H∞ performance is investigated and the outputs of the closed-loop system are bounded in the H∞ sense. Two examples are given to illustrate the effectiveness of the proposed control scheme.
A Probabilistic Neural-Fuzzy Learning System for Stochastic Modeling A probabilistic fuzzy neural network (PFNN) with a hybrid learning mechanism is proposed to handle complex stochastic uncertainties. Fuzzy logic systems (FLSs) are well known for vagueness processing. Embedded with the probabilistic method, an FLS will possess the capability to capture stochastic uncertainties. Further enhanced with the neural learning, it will be able to work under time-varying stochastic environment. Integrated with a statistical process control (SPC) based monitoring method, the PFNN can maintain the robust modeling performance. Finally, the successful simulation demonstrates the modeling effectiveness of the proposed PFNN under the time-varying stochastic conditions.
Design of Fuzzy-Neural-Network-Inherited Backstepping Control for Robot Manipulator Including Actuator Dynamics This study presents the design and analysis of an intelligent control system that inherits the systematic and recursive design methodology for an n-link robot manipulator, including actuator dynamics, in order to achieve a high-precision position tracking with a firm stability and robustness. First, the coupled higher order dynamic model of an n-link robot manipulator is introduced briefly. Then, a conventional backstepping control (BSC) scheme is developed for the joint position tracking of the robot manipulator. Moreover, a fuzzy-neural-network-inherited BSC (FNNIBSC) scheme is proposed to relax the requirement of detailed system information to improve the robustness of BSC and to deal with the serious chattering that is caused by the discontinuous function. In the FNNIBSC strategy, the FNN framework is designed to mimic the BSC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and Lyapunov stability theorem to ensure the network convergence as well as stable control performance. Numerical simulations and experimental results of a two-link robot manipulator that are actuated by dc servomotors are provided to justify the claims of the proposed FNNIBSC system, and the superiority of the proposed FNNIBSC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.
Reactive Power Control of Three-Phase Grid-Connected PV System During Grid Faults Using Takagi–Sugeno–Kang Probabilistic Fuzzy Neural Network Control An intelligent controller based on the Takagi-Sugeno-Kang-type probabilistic fuzzy neural network with an asymmetric membership function (TSKPFNN-AMF) is developed in this paper for the reactive and active power control of a three-phase grid-connected photovoltaic (PV) system during grid faults. The inverter of the three-phase grid-connected PV system should provide a proper ratio of reactive power to meet the low-voltage ride through (LVRT) regulations and control the output current without exceeding the maximum current limit simultaneously during grid faults. Therefore, the proposed intelligent controller regulates the value of reactive power to a new reference value, which complies with the regulations of LVRT under grid faults. Moreover, a dual-mode operation control method of the converter and inverter of the three-phase grid-connected PV system is designed to eliminate the fluctuation of dc-link bus voltage under grid faults. Furthermore, the network structure, the online learning algorithm, and the convergence analysis of the TSKPFNN-AMF are described in detail. Finally, some experimental results are illustrated to show the effectiveness of the proposed control for the three-phase grid-connected PV system.
Discrete-Time Quasi-Sliding-Mode Control With Prescribed Performance Function and its Application to Piezo-Actuated Positioning Systems. In this paper, the constrained control problem of the prescribed performance control technique is discussed in discrete-time domain for single input-single output dynamical systems. The goal of this design is to maintain the tracking error trajectory in a predefined convergence zone described by a performance function in the presence of the uncertainties. In order to achieve this goal, the discret...
Sliding mode control for singularly perturbed Markov jump descriptor systems with nonlinear perturbation This paper develops a stochastic integral sliding mode control strategy for singularly perturbed Markov jump descriptor systems subject to nonlinear perturbation. The transition probabilities (TPs) for the system modes are considered to switch randomly within a finite set. We first present a novel mode and switch-dependent integral switching surface, based upon which the resulting sliding mode dynamics (SMD) only suffers from the unmatched perturbation that is not amplified in the Euclidean norm sense. To overcome the difficulty of synthesizing the nominal controller, we rewrite the SMD into the equivalent descriptor form. By virtue of the fixed-point principle and stochastic system theory, we give a rigorous proof for the existence and uniqueness of the solution and the mean-square exponential admissibility for the transformed SMD. A generalized framework that covers arbitrary switching and Markov switching of the TPs as special cases is further achieved. Then, by analyzing the stochastic reachability of the sliding motion, we synthesize a mode and switch-dependent SMC law. The adaptive technique is further integrated to estimate the unavailable boundaries of the matched perturbation. Finally, simulation results on an electronic circuit system confirm the validity and benefits of the developed control strategy.
Evolutionary Fuzzy Control and Navigation for Two Wheeled Robots Cooperatively Carrying an Object in Unknown Environments This paper presents a method that allows two wheeled, mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FC) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot to perform OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms for the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.
On sampled-data fuzzy control design approach for T-S model-based fuzzy systems by using discretization approach. In this paper, a sampled-data fuzzy control design approach for Takagi-Sugeno (T-S) model-based fuzzy systems is proposed. Firstly, the T-S model-based fuzzy systems are approximated by T-S fuzzy model-based discrete systems with parameter uncertainties. Then a sufficient condition on the existence of a sampled-data fuzzy controller for the fuzzy systems is formulated in the form of linear matrix inequalities, which are proposed in a symmetrical form, so the proposed fuzzy control design approach is less conservative and some problems in the existing literature can be solved. Finally, a simulation example is discussed to show the effectiveness of the fuzzy control design approach obtained in this paper.
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs. This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed step sizes and, yet, drives all the agents' iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed step-sizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the step sizes do not exceed some upper bounds. We establish explicit estimates for the convergence rates. When the graph is undirected it shows that DIGing scales polynomially in the number of agents. We also provide some numerical experiments to demonstrate the efficacy of the proposed algorithms and to validate our theoretical findings.
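For intuition only, here is a toy scalar instance of the DIGing update (mix, take a gradient-tracking step, then update the trackers); the ring topology, mixing weights, step size, and local objectives are arbitrary illustrative choices, not the paper's experimental setup.

```python
# DIGing on a toy problem: each agent i minimizes f_i(x) = 0.5 * (x - a_i)^2,
# so the global (consensual) minimizer is the mean of the a_i.
import numpy as np

n = 5
a = np.array([1.0, 3.0, -2.0, 4.0, 0.5])      # local targets
grad = lambda x: x - a                         # per-agent gradients

# Doubly stochastic mixing matrix for a ring (lazy Metropolis-style weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.05
x = np.zeros(n)        # agents' iterates
y = grad(x)            # gradient trackers, initialized to the local gradients

for _ in range(400):
    x_next = W @ x - alpha * y                 # mix, then step along tracker
    y = W @ y + grad(x_next) - grad(x)         # track the average gradient
    x = x_next

print("consensual estimates:", np.round(x, 4))  # all close to a.mean() = 1.3
print("true minimizer      :", a.mean())
```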
Disk Paxos We present an algorithm, called Disk Paxos, for implementing a reliable distributed system with a network of processors and disks. Like the original Paxos algorithm, Disk Paxos maintains consistency in the presence of arbitrary non-Byzantine faults. Progress can be guaranteed as long as a majority of the disks are available, even if all processors but one have failed.
Perception-Based Data Reduction and Transmission of Haptic Data in Telepresence and Teleaction Systems We present a novel approach for the transmission of haptic data in telepresence and teleaction systems. The goal of this work is to reduce the packet rate between an operator and a teleoperator without impairing the immersiveness of the system. Our approach exploits the properties of human haptic perception and is, more specifically, based on the concept of just noticeable differences. In our scheme, updates of the haptic amplitude values are signaled across the network only if the change of a haptic stimulus is detectable by the human operator. We investigate haptic data communication for a 1 degree-of-freedom (DoF) and a 3 DoF teleaction system. Our experimental results show that the presented approach is able to reduce the packet rate between the operator and teleoperator by up to 90% of the original rate without affecting the performance of the system.
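A minimal sketch of the deadband idea described above, assuming a simple Weber-fraction threshold k (the 10% value and the synthetic force signal are illustrative, not the paper's experimental parameters):

```python
# Transmit a haptic sample only when it differs from the last transmitted
# value by more than a just-noticeable-difference fraction k.
import numpy as np

def deadband_transmit(samples, k=0.1):
    sent = [samples[0]]
    last = samples[0]
    for v in samples[1:]:
        if abs(v - last) > k * abs(last):
            sent.append(v)
            last = v
    return sent

t = np.linspace(0, 1, 1000)
force = 2.0 + np.sin(2 * np.pi * 3 * t)        # synthetic 1-DoF force signal
sent = deadband_transmit(force, k=0.1)
print(f"packet-rate reduction: {100 * (1 - len(sent) / len(force)):.1f}%")
```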
Remote Stabilization Via Communication Networks With a Distributed Control Law In this paper we investigate the problem of remote stabilization via communication networks involving some time-varying delays of known average dynamics. This problem arises when the control law is remotely implemented and leads to the problem of stabilizing an open-loop unstable system with time-varying delay. We use a time-varying horizon predictor to design a stabilizing control law that sets the poles of the closed-loop system. The computation of the horizon of the predictor is investigated and the proposed control law explicitly takes into account an estimation of the average delay dynamics. The resulting closed-loop system robustness with respect to some uncertainties on the delay estimation is also considered. Simulation results are finally presented.
Spiking Deep Convolutional Neural Networks for Energy-Efficient Object Recognition Deep-learning neural networks such as convolutional neural network (CNN) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural networks (SNN)-based architectures have shown great potential as a solution for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into a SNN that enables mapping CNN to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of SNN, then trains the tailored CNN in the same way as one would with CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show similar object recognition accuracy as the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the ones being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.055573
0.05
0.05
0.05
0.05
0.05
0.013333
0.005333
0.000143
0
0
0
0
0
A 10-Bit 600-MS/s Time-Interleaved SAR ADC With Interpolation-Based Timing Skew Calibration. This brief presents a 10-bit 600 MS/s 4-channel time-interleaved (TI) successive approximation register analog-to-digital converter (ADC). A background calibration algorithm using Lagrange polynomial interpolation is introduced to calibrate timing skew. It consists of digital detection and adaptive derivative-based correction, employing low filter taps and resulting in hardware reduction. Two refe...
All-Digital Blind Background Calibration Technique for Any Channel Time-Interleaved ADC. This paper proposes a novel digital adaptive blind background calibration technique for the gain, timing skew, and offset mismatch errors in a time-interleaved analog-to-digital converter (TI-ADC). Based on the frequency-shifted basis functions generated only from the measured TI-ADC output, the three mismatch errors can be represented, extracted, and then subtracted from the TI-ADC output adaptiv...
Improved design of digital fractional-order differentiators using fractional sample delay. In this paper, a digital fractional-order differentiator (FOD) is designed by using fractional sample delay. To improve the design accuracy of conventional fractional differencing and Tustin design methods at high frequency regions, the integer delay is replaced by fractional sample delay. By using the well-documented finite-impulse-response Lagrange, infinite impulse response allpass, and Farrow ...
Digitally Enhanced Wideband I/Q Downconversion Receiver With 2-Channel Time-Interleaved ADCs The interesting concept of employing in-phase/quadrature (I/Q) downconversion together with time-interleaved analog-to-digital converters allows digitizing very wide instantaneous radio-frequency (RF) bandwidths (BWs) and grants enhanced flexibility in accessing and processing the RF spectrum. Such a structure also inevitably suffers from performance degradation due to the analog components' nonidealities, e.g., frequency response mismatches (FRMs), that ultimately lead to spurious mismatch components that limit the system's dynamic range. Available solutions developed for I/Q mismatches or time-interleaving mismatches alone are not compatible for correcting the FRM spurs in such joint I/Q time-interleaved converter (IQ-TIC) architecture. This brief proposes novel blind FRM identification and correction solutions that are able to suppress all the associated spurs in the considered wideband IQ-TIC architecture. The proposed digital correction solutions are tested and verified using measured hardware data obtained from an experimental platform, exhibiting good FRM spur correction performance with instantaneous BW on the order of 800 MHz. These developments pave the way toward 5G radio communication devices and systems where instantaneous BWs on the order of 1 GHz are envisioned at centimeter- and millimeter-wave frequency bands.
Low complexity digital background calibration algorithm for the correction of timing mismatch in time-interleaved ADCs. A low-complexity post-processing algorithm to estimate and compensate for timing skew error in a four-channel time-interleaved analog to digital converter (TIADC) is presented in this paper, together with its hardware implementation. The Lagrange interpolator is used as the reconstruction filter which alleviates online interpolator redesign by using a simplified representation of coefficients. Simulation results show that the proposed algorithm can suppress error tones for input signal frequency from 0 to 0.4fs. The proposed structure has, at least, 41% reduction in the number of required multipliers. Implementation of the algorithm, for a four-channel 10-bit TIADC, show that, for a 0.4fs input signal frequency, the Signal to Noise and Distortion Ratio (SNDR) and Spurious-Free Dynamic Range (SFDR) are improved 31.26 dB and 43.7 dB, respectively. Our proposed approximation technique does not degrade the performance of system, resulting in the same SNDR and SFDR as the exact coefficient values. In addition, the proposed structure provides an acceptable performance in the presence of wideband signals.
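As a hedged illustration of the reconstruction-filter building block mentioned above, the snippet below computes the coefficients of a Lagrange fractional-delay FIR filter and applies it to a sampled sine; the filter order and delay are arbitrary demo values, and no claim is made that this matches the paper's exact coefficient representation.

```python
# Lagrange fractional-delay FIR: h[n] = prod_{k != n} (d - k) / (n - k).
import numpy as np

def lagrange_fd_coeffs(order, delay):
    """Coefficients of an order-`order` Lagrange fractional-delay filter
    approximating a delay of `delay` samples (0 <= delay <= order)."""
    n = np.arange(order + 1)
    h = np.ones(order + 1)
    for k in range(order + 1):
        mask = n != k
        h[mask] *= (delay - k) / (n[mask] - k)
    return h

# Delay a sampled sine by 1.3 samples (1 integer + 0.3 fractional).
fs, f0 = 100.0, 3.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)
h = lagrange_fd_coeffs(order=3, delay=1.3)
y = np.convolve(x, h)[: len(x)]
print("coefficients:", np.round(h, 4), "sum =", h.sum())  # sums to 1
```

Reusing a fixed closed-form coefficient expression like this, rather than redesigning the interpolator online, is exactly the kind of simplification the abstract credits for its hardware savings.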
All-Digital Calibration of Timing Mismatch Error in Time-Interleaved Analog-to-Digital Converters. This paper presents an all-digital background calibration for timing mismatch in time-interleaved analog-to-digital converters (TI-ADCs). It combines digital adaptive timing mismatch estimation and digital derivative-based correction, achieving lower hardware cost and better suppression of timing mismatch tones than previous work. In addition, for the first time closed-form exact expressions for t...
Reconstruction of Two-Periodic Nonuniformly Sampled Band-Limited Signals Using a Discrete-Time Differentiator and a Time-Varying Multiplier This brief considers the problem of reconstructing a band-limited signal from its two-periodic nonuniformly spaced samples. We propose a novel reconstruction system where a finite-impulse response filter designed as differentiator followed by a time-varying multiplier recovers the uniformly spaced from the nonuniformly spaced samples. The system roughly doubles the signal-to-noise ratio with relat...
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Multilevel k-way hypergraph partitioning In this paper, we present a new multilevel k-way hypergraph partitioning algorithm that substantially outperforms the existing state-of-the-art K-PM/LR algorithm for multi-way partitioning, both for optimizing local as well as global objectives. Experiments on the ISPD98 benchmark suite show that the partitionings produced by our scheme are on the average 15% to 23% better than those produced by the K-PM/LR algorithm, both in terms of the hyperedge cut as well as the (K − 1) metric. Furthermore, our algorithm is significantly faster, requiring 4 to 5 times less time than that required by K-PM/LR.
The software radio architecture As communications technology continues its rapid transition from analog to digital, more functions of contemporary radio systems are implemented in software, leading toward the software radio. This article provides a tutorial review of software radio architectures and technology, highlighting benefits, pitfalls, and lessons learned. This includes a closer look at the canonical functional partitioning of channel coding into antenna, RF, IF, baseband, and bitstream segments. A more detailed look at the estimation of demand for critical resources is key. This leads to a discussion of affordable hardware configurations, the mapping of functions to component hardware, and related software tools. This article then concludes with a brief treatment of the economics and likely future directions of software radio technology
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.1
0.1
0.1
0.1
0.1
0.066667
0.02
0
0
0
0
0
0
0
Modified Design and Analysis of a CMOS LNA for Wireless Sensor Network Applications This paper focuses on the modified design and implementations of low noise amplifiers targeted for wireless sensor network applications. Key issues in wireless sensor network receivers are discussed and the existing circuit-level implementations of low noise amplifiers are compared. Emphasis was placed on observing device reliability constraints at low power to maximize the lifetime of the wireless sensor nodes. The proposed low noise amplifier can operate at 2.4 GHz and dissipates a power of 2 mW from a 1 V supply. It gives 11, 3.9 dB noise figure at 20 dB gain and an IIP3 of -2.88 dBm.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
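A toy sketch of Chord's key-to-node mapping (consistent hashing onto an identifier ring with successor placement); finger tables, node joins, and failures, which the paper also handles, are omitted, and the ring size and node names are invented for the demo.

```python
# Map nodes and keys onto an m-bit identifier ring; a key is stored at its
# successor, i.e. the first node clockwise from the key's identifier.
import hashlib

M = 8                                   # identifier bits (tiny demo ring)

def ident(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

nodes = sorted(ident(f"node-{i}") for i in range(6))

def successor(key_id):
    for n in nodes:
        if n >= key_id:
            return n
    return nodes[0]                     # wrap around the ring

for key in ("alice.txt", "bob.txt", "carol.txt"):
    k = ident(key)
    print(f"key {key!r} (id {k:3d}) -> node {successor(k)}")
```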
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
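As an illustrative aside, a minimal ADMM instance for the lasso, one of the applications listed above; the problem sizes, rho, and lambda are arbitrary demo values.

```python
# ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating
# a ridge-like x-update, a soft-thresholding z-update, and a dual update.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [2.0, -1.5, 0.7]                    # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, rho = 0.1, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))
Atb = A.T @ b

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(100):
    x = AtA_rhoI_inv @ (Atb + rho * (z - u))     # x-update
    z = soft_threshold(x + u, lam / rho)         # z-update (prox of l1)
    u = u + x - z                                # scaled dual update

print("nonzero coefficients recovered at indices:",
      np.flatnonzero(np.abs(z) > 1e-3))
```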
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage operation and power efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). Tri-level comparator is proposed to relax the speed requirement of the comparator and decrease the resolution of internal Digital-to-Analog Converter (DAC) by 1-bit. The internal charge redistribution DAC employs unit capacitance of 0.5 fF and ADC operates at nearly thermal noise limitation. To deal with the problem of capacitor mismatch, reconfigurable capacitor array and calibration procedure were developed. The prototype ADC fabricated using 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR with 1.1 MS/sec at 0.5 V power supply. The FoM is 6.3-fJ/conversion step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with a high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
TILE64 - Processor: A 64-Core SoC with Mesh Interconnect The TILE64™ processor is a multicore SoC targeting the high-performance demands of a wide range of embedded applications across networking and digital multimedia. A figure shows a block diagram with 64 tile processors arranged in an 8x8 array. These tiles connect through a scalable 2D mesh network with high-speed I/Os on the periphery. Each general-purpose processor is identical and capable of running SMP Linux.
Dynamic adaptive virtual core mapping to improve power, energy, and performance in multi-socket multicores Consider a multithreaded parallel application running inside a multicore virtual machine context that is itself hosted on a multi-socket multicore physical machine. How should the VMM map virtual cores to physical cores? We compare a local mapping, which compacts virtual cores to processor sockets, and an interleaved mapping, which spreads them over the sockets. Simply choosing between these two mappings exposes clear tradeoffs between performance, energy, and power. We then describe the design, implementation, and evaluation of a system that automatically and dynamically chooses between the two mappings. The system consists of a set of efficient online VMM-based mechanisms and policies that (a) capture the relevant characteristics of memory reference behavior, (b) provide a policy and mechanism for configuring the mapping of virtual machine cores to physical cores that optimizes for power, energy, or performance, and (c) drive dynamic migrations of virtual cores among local physical cores based on the workload and the currently specified objective. Using these techniques we demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.
Whispers in the Hyper-Space: High-Bandwidth and Reliable Covert Channel Attacks Inside the Cloud Privacy and information security in general are major concerns that impede enterprise adaptation of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a “potential risk.” In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment and uncover their major insufficiency and difficulties. We then overcome these obstacles by: 1) redesigning a pure timing-based data transmission scheme, and 2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experimental results show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
Cache-Based Application Detection in the Cloud Using Machine Learning. Cross-VM attacks have emerged as a major threat on commercial clouds. These attacks commonly exploit hardware-level leakages on shared physical servers. A co-located machine can readily feel the presence of a co-located instance with a heavy computational load through performance degradation due to contention on shared resources. Shared cache architectures such as the last-level cache (LLC) have become a popular leakage source for mounting cross-VM attacks. By exploiting LLC leakages, researchers have already shown that it is possible to recover fine-grained information such as cryptographic keys from popular software libraries. This makes it essential to verify implementations that handle sensitive data across the many versions and numerous target platforms, a task too complicated, error-prone and costly to be handled by human beings. Here we propose a machine learning based technique to classify applications according to their cache access profiles. We show that with minimal and simple manual processing steps, feature vectors can be used to train models using support vector machines to classify the applications with a high degree of success. The profiling and training steps are completely automated and do not require any inspection or study of the code to be classified. In native execution, we achieve a successful classification rate as high as 98% (L1 cache) and 78% (LLC) over 40 benchmark applications in the Phoronix suite with mild training. In the cross-VM setting on the noisy Amazon EC2, the success rate drops to 60% for a suite of 25 applications. With this initial study we demonstrate that it is possible to train meaningful models to successfully predict applications running in co-located instances.
NetCAT: Practical Cache Attacks from the Network Increased peripheral performance is causing strain on the memory subsystem of modern processors. For example, available DRAM throughput can no longer sustain the traffic of a modern network card. Scrambling to deliver the promised performance, instead of transferring peripheral data to and from DRAM, modern Intel processors perform I/O operations directly on the Last Level Cache (LLC). While Direct Cache Access (DCA) instead of Direct Memory Access (DMA) is a sensible performance optimization, it is unfortunately implemented without care for security, as the LLC is now shared between the CPU and all the attached devices, including the network card.In this paper, we reverse engineer the behavior of DCA, widely referred to as Data-Direct I/O (DDIO), on recent Intel processors and present its first security analysis. Based on our analysis, we present NetCAT, the first Network-based PRIME+PROBE Cache Attack on the processor's LLC of a remote machine. We show that NetCAT not only enables attacks in cooperative settings where an attacker can build a covert channel between a network client and a sandboxed server process (without network), but more worryingly, in general adversarial settings. In such settings, NetCAT can enable disclosure of network timing-based sensitive information. As an example, we show a keystroke timing attack on a victim SSH connection belonging to another client on the target server. Our results should caution processor vendors against unsupervised sharing of (additional) microarchitectural components with peripherals exposed to malicious input.
On introducing noise into the bus-contention channel. We explore two approaches to introducing noise into the bus-contention channel: an existing approach called fuzzy time, and a novel approach called probabilistic partitioning. We compare the two approaches in terms of the impact on covert channel capacity, the impact on performance, the amount of random data needed, and their suitability for various applications. For probabilistic partitioning, we obtain a precise tradeoff between covert channel capacity and performance.
Last-Level Cache Side-Channel Attacks are Practical We present an effective implementation of the Prime+Probe side-channel attack against the last-level cache. We measure the capacity of the covert channel the attack creates and demonstrate a cross-core, cross-VM attack on multiple versions of GnuPG. Our technique achieves a high attack resolution without relying on weaknesses in the OS or virtual machine monitor or on sharing memory between attacker and victim.
The GPU Computing Era GPU computing is at a tipping point, becoming more widely used in demanding consumer applications and high-performance computing. This article describes the rapid evolution of GPU architectures—from graphics processors to massively parallel many-core multiprocessors, recent developments in GPU computing architectures, and how the enthusiastic adoption of CPU+GPU coprocessing is accelerating parallel applications.
Searching in an unknown environment: an optimal randomized algorithm for the cow-path problem Searching for a goal is a central and extensively studied problem in computer science. In classical searching problems, the cost of a search function is simply the number of queries made to an oracle that knows the position of the goal. In many robotics problems, as well as in problems from other areas, we want to charge a cost proportional to the distance between queries (e.g., the time required to travel between two query points). With this cost function in mind, the abstract problem known as the w-lane cow-path problem was designed. There are known optimal deterministic algorithms for the cow-path problem; we give the first randomized algorithm in this paper. We show that our algorithm is optimal for two paths (w = 2) and give evidence that it is optimal for larger values of w. Subsequent to the preliminary version of this paper, Kao et al. (in "Proceedings, 5th ACM-SIAM Symposium on Discrete Algorithms," pp. 372-381, 1994) have shown that our algorithm is indeed optimal for all w ≥ 2. Our randomized algorithm gives expected performance that is almost twice as good as is possible with a deterministic algorithm. For the performance of our algorithm, we also derive the asymptotic growth with respect to w; despite similar complexity results for related problems, it appears that this growth has never been analyzed.
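To make the cost model of the cow-path problem concrete, here is a small Python sketch (names hypothetical) of the classic deterministic doubling strategy for the two-lane case, which is 9-competitive; the randomized strategy of the paper improves on this by roughly a factor of two in expectation and is not reproduced here.

def cow_path_deterministic(goal_lane, goal_dist):
    """Doubling strategy for two lanes meeting at the origin.

    Explore lane 0 up to distance 1 and return, lane 1 up to 2 and return,
    lane 0 up to 4, and so on, until the goal is reached. Returns the total
    distance walked, which is at most 9 * goal_dist plus a constant.
    """
    walked, step, i = 0.0, 1.0, 0
    while True:
        lane = i % 2
        if lane == goal_lane and goal_dist <= step:
            return walked + goal_dist      # reach the goal on this pass
        walked += 2 * step                 # walk out and back
        step *= 2
        i += 1

# Example: a goal on lane 1 at distance 5 costs 19, versus an optimal cost of 5.
print(cow_path_deterministic(1, 5.0))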
Adaptive Synchronization of an Uncertain Complex Dynamical Network This brief paper further investigates the locally and globally adaptive synchronization of an uncertain complex dynamical network. Several network synchronization criteria are deduced. Especially, our hypotheses and designed adaptive controllers for network synchronization are rather simple in form. It is very useful for future practical engineering design. Moreover, numerical simulations are also given to show the effectiveness of our synchronization approaches.
Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so-called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the ciphertext at all. It is the first working attack on AES implementations using compressed tables, where no efficient techniques to identify the beginning of AES rounds are known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenSSL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process.
A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing The evolution of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. System designers are faced with a challenging set of problems that stem from the access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links, and mobility aspects. This paper first presents the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed through the implementation of a flexible and computation-intensive component of future mobile terminals.
Robust compensation of a chattering time-varying input delay We investigate the design of a prediction-based controller for a linear system subject to a time-varying input delay, not necessarily causal. This means that the information feeding the system can be older than previously received information. We propose to use the current delay value in the prediction employed in the control law. Modeling the input delay as a transport Partial Differential Equation, we prove asymptotic tracking of the system state, provided that the average ℒ2-norm of the delay time-derivative is sufficiently small. This result is obtained by generalizing Halanay's inequality to time-varying differential inequalities.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible enough to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signals with various signal dimensions (128, 256, 384, and 512). Data c...
1.2
0.2
0.2
0.2
0.2
0.066667
0.004762
0
0
0
0
0
0
0
A 3x9 Gb/s Shared, All-Digital CDR for High-Speed, High-Density I/O. This paper presents a novel all-digital CDR scheme in 90 nm CMOS. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data (“data clock”) and the other is swept across the delay line (“search clock”). As the search clock is swept, its samples are compared against the data samples to generate...
0.6–2.7-Gb/s Referenceless Parallel CDR With a Stochastic Dispersion-Tolerant Frequency Acquisition Technique A 0.6-2.7-Gb/s phase-rotator-based four-channel digital clock and data recovery (CDR) IC featuring a low-power dispersion-tolerant referenceless frequency acquisition technique is presented. A quasi-periodic reference clock signal extracted directly from a dispersed input signal is distributed to digitally controlled phase rotators in the CDR ICs for phase acquisition. A multiphase frequency acquisition scheme is employed for the reduction of the clock jitter. The measurement results show that the proposed design offers a lower frequency offset and clock noise floor under channel dispersion, as compared with conventional designs. The proposed four-channel digital CDR IC is fabricated in a 90-nm CMOS process. The figure of merit for a single channel, including a feedforward equalizer, a decision-feedback equalizer, and a referenceless CDR, is 8 mW/Gb/s.
A 26–28-Gb/s Full-Rate Clock and Data Recovery Circuit With Embedded Equalizer in 65-nm CMOS This paper presents a power and area efficient approach to embed a continuous-time linear equalizer (CTLE) within a clock and data recovery (CDR) circuit implemented in 65-nm CMOS. The merged equalizer/CDR circuit achieves full-rate operation up to 28 Gb/s while drawing 104 mA from a 1-V supply and occupying 0.33 mm². Current-mode-logic (CML) circuits with shunt peaking loads using customized differential pair layout are used to maximize circuit bandwidth. To minimize the area penalty, differential stacked spiral inductors (DSSIs) are employed extensively. A novel and practical methodology is introduced for designing DSSIs based on single-layer inductors provided in foundry process design kits (PDK). The DSSI design increases the inductance density by over 3 times and the self-resonance frequency by 20% compared to standard single-layer inductors in the PDK. The measured BER of the recovered data by the CDR is less than 10⁻¹² at 27 Gb/s for a 2¹¹−1, 400 mVpp pseudo-random binary sequence (PRBS) as input data. The measured rms jitter of the recovered clock and data are 1.0 and 2.6 ps, respectively. The CDR is able to lock to inputs ranging from 26 to 28 Gb/s with a 2⁹−1 PRBS pattern. Measurement results show that with the equalizer enabled, the CDR can recover 26-Gb/s 2⁷−1 PRBS data with BER ≤ 10⁻¹² after a channel with 9-dB loss at 13 GHz.
A 400 Mb/s∼2.5 Gb/s Referenceless CDR IC Using Intrinsic Frequency Detection Capability of Half-Rate Linear Phase Detector. A 400 Mb/s ~2.5 Gb/s referenceless clock and data recovery (CDR) IC is presented. This paper shows that the half-rate linear phase detector (PD) has not only phase detection capability but also single-sided frequency detection capability in itself. By using this intrinsic frequency detection capability of the half-rate linear PD, a CDR can be implemented in the single loop architecture without bot...
A 56-Gb/s 8-mW PAM4 CDR/DMUX With High Jitter Tolerance The demand for low-power wireline circuits has motivated extensive work on novel circuit solutions. This article describes a one-eighth-rate clock and data recovery (CDR) circuit and a demultiplexer (DMUX) for processing four-level pulse-amplitude modulation (PAM4) signals in receivers (RXs). Detecting both major and minor data transitions, the proposed architecture can achieve a wider loop bandwidth (BW), suppressing oscillator phase noise and improving the jitter tolerance. Fabricated in 28-nm CMOS technology, the prototype provides a jitter transfer BW of 160 MHz with a tolerance of 1 UI at 10 MHz.
A 140-Mb/s to 1.82-Gb/s continuous-rate embedded clock receiver for flat-panel displays A wide-range fast-locking embedded clock receiver, which can provide a continuous data rate of 140 Mb/s to 1.82 Gb/s in a 0.25-µm CMOS process, is presented. A fast lock time of 7.5 µs and a small root-mean-square jitter of 15 ps are achieved by using the proposed frequency-band selection and frequency acquisition schemes, as well as a simple linear-phase detector. The implemented embedded clock receiver occupies 2.00 mm2 and consumes currents of 44 and 137 mA at 140 Mb/s and 1.82 Gb/s, respectively, including input/output currents.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
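As a minimal illustration of the key-to-node mapping Chord builds on (consistent hashing onto an identifier circle), the Python sketch below assigns each key to its successor node; the finger tables that give Chord its logarithmic lookup cost, as well as node joins and departures, are omitted, and all identifiers here are hypothetical.

import hashlib

M = 8                                        # identifier bits, kept small for illustration

def chord_id(name: str) -> int:
    """Hash a node name or key onto the 2**M identifier circle."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, ident):
    """Return the first node at or after 'ident' when moving clockwise."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= ident:
            return n
    return ring[0]                           # wrap around the circle

# Five hypothetical nodes; each key is stored on the successor of its identifier.
nodes = [chord_id(f"node-{i}") for i in range(5)]
for key in ["alpha", "beta", "gamma"]:
    print(key, "->", successor(nodes, chord_id(key)))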
Measuring the Gap Between FPGAs and ASICs This paper presents experimental measurements of the differences between a 90nm CMOS FPGA and 90nm CMOS Standard Cell ASICs in terms of logic density, circuit speed and power consumption. We are motivated to make these measurements to enable system designers to make better informed choices between these two media and to give insight to FPGA makers on the deficiencies to attack and thereby improve FPGAs. In the paper, we describe the methodology by which the measurements were obtained and we show that, for circuits containing only combinational logic and flip-flops, the ratio of silicon area required to implement them in FPGAs and ASICs is on average 40. Modern FPGAs also contain "hard" blocks such as multiplier/accumulators and block memories, and we find that these blocks reduce this average area gap significantly to as little as 21. The ratio of critical path delay, from FPGA to ASIC, is roughly 3 to 4, with less influence from block memory and hard multipliers. The dynamic power consumption ratio is approximately 12 times and, with hard blocks, this gap generally becomes smaller.
Termination detection for diffusing computations
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
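To fix ideas about the projected multi-agent subgradient iteration the abstract analyzes, here is a simplified Python/NumPy sketch with a fixed doubly-stochastic mixing matrix, a common constraint set and a constant stepsize; the state-dependent, randomly varying communication and the stepsize conditions studied in the paper are deliberately left out, and every name and constant below is hypothetical.

import numpy as np

def projected_subgradient_consensus(W, grads, proj, x0, steps=300, alpha=0.05):
    """Each agent averages its neighbours' estimates, takes a subgradient
    step on its own local objective, then projects onto the constraint set."""
    x = x0.copy()
    for _ in range(steps):
        mixed = W @ x                                  # consensus (averaging) step
        for i in range(x.shape[0]):
            x[i] = proj(mixed[i] - alpha * grads[i](mixed[i]))
    return x

# Toy problem: agent i holds f_i(x) = (x - i)^2, constraint set is the box [0, 2].
n = 4
W = np.full((n, n), 1.0 / n)                           # complete-graph averaging
grads = [lambda v, i=i: 2 * (v - i) for i in range(n)]
proj = lambda v: np.clip(v, 0.0, 2.0)
print(projected_subgradient_consensus(W, grads, proj, np.zeros((n, 1))))
# All estimates settle near 1.5, the constrained minimizer of the summed objective.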
Practical Mitigations for Timing-Based Side-Channel Attacks on Modern x86 Processors This paper studies and evaluates the extent to which automated compiler techniques can defend against timing-based side-channel attacks on modern x86 processors. We study how modern x86 processors can leak timing information through side-channels that relate to control flow and data flow. To eliminate key-dependent control flow and key-dependent timing behavior related to control flow, we propose the use of if-conversion in a compiler backend, and evaluate a proof-of-concept prototype implementation. Furthermore, we demonstrate two ways in which programs that lack key-dependent control flow and key-dependent cache behavior can still leak timing information on modern x86 implementations such as the Intel Core 2 Duo, and propose defense mechanisms against them.
A Dht-Based Discovery Service For The Internet Of Things Current trends towards the Future Internet are envisaging the conception of novel services endowed with context-aware and autonomic capabilities to improve end users' quality of life. The Internet of Things paradigm is expected to contribute towards this ambitious vision by proposing models and mechanisms enabling the creation of networks of "smart things" on a large scale. It is widely recognized that efficient mechanisms for discovering available resources and capabilities are required to realize such vision. The contribution of this work consists in a novel discovery service for the Internet of Things. The proposed solution adopts a peer-to-peer approach for guaranteeing scalability, robustness, and easy maintenance of the overall system. While most existing peer-to-peer discovery services proposed for the IoT support solely exact match queries on a single attribute (i. e., the object identifier), our solution can handle multiattribute and range queries. We defined a layered approach by distinguishing three main aspects: multiattribute indexing, range query support, peer-to-peer routing. We chose to adopt an over-DHT indexing scheme to guarantee ease of design and implementation principles. We report on the implementation of a Proof of Concept in a dangerous goods monitoring scenario, and, finally, we discuss test results for structural properties and query performance evaluation.
P2P-Based Service Distribution over Distributed Resources Dynamic or demand-driven service deployment in a Grid or Cloud environment is an important issue considering the varying nature of demand. Most distributed frameworks either offer static service deployment which results in resource allocation problems, or, are job-based where for each invocation, the job along with the data has to be transferred for remote execution resulting in increased communication cost. An alternative approach is dynamic demand-driven provisioning of services as proposed in earlier literature, but the proposed methods fail to account for the volatility of resources in a Grid environment. In this paper, we propose a unique peer-to-peer based approach for dynamic service provisioning which incorporates a Bit-Torrent like protocol for provisioning the service on a remote node. Being built around a P2P model, the proposed framework caters to resource volatility and also incurs lower provisioning cost.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
3-D Technology Assessment: Path-Finding the Technology/Design Sweet-Spot It is widely acknowledged that three-dimensional (3-D) technologies offer numerous opportunities for system design. In recent years, significant progress has been made on these 3-D technologies, and they have become probably the best hope for carrying the semiconductor industry beyond the path of Moore's law. However, a clear roadmap is missing to successfully introduce this 3-D technology onto th...
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
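Since the abstract's central idea (dominance frontiers as the places where SSA phi-functions go) is compact enough to show in code, the Python sketch below computes frontiers using the later Cooper/Harvey/Kennedy formulation, which yields the same sets as the paper's definition but is not the paper's own algorithm; the example CFG and all names are hypothetical.

def dominance_frontiers(preds, idom):
    """preds: node -> list of predecessors; idom: node -> immediate dominator.

    For every join point (two or more predecessors), walk up from each
    predecessor toward the join's immediate dominator, adding the join to
    the frontier of every node passed along the way."""
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) < 2:
            continue
        for p in ps:
            runner = p
            while runner != idom[b]:
                df[runner].add(b)
                runner = idom[runner]
    return df

# Diamond-shaped CFG: entry branches to a and b, which both reach join.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "join": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "join": "entry"}
print(dominance_frontiers(preds, idom))   # a and b each have {join} in their frontier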
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
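Because the lasso is a canonical example in this review, a minimal NumPy sketch of scaled-form ADMM for min 0.5*||Ax-b||^2 + lam*||x||_1 is given below; stopping criteria, over-relaxation and the distributed MPI/MapReduce variants discussed in the text are omitted, and the data and parameter values are made up for illustration.

import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """x-update: ridge-like linear solve; z-update: soft-thresholding;
    u: scaled dual variable accumulating the running constraint violation."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))          # factor once, reuse
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-minimization
        z = soft(x + u, lam / rho)                         # z-minimization
        u = u + x - z                                      # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))              # recovers the sparse support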
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
A 0.5 V 1.1 MS/sec 6.3 fJ/Conversion-Step SAR-ADC With Tri-Level Comparator in 40 nm CMOS This paper presents an extremely low-voltage, power-efficient successive-approximation-register (SAR) analog-to-digital converter (ADC). A tri-level comparator is proposed to relax the speed requirement of the comparator and to reduce the resolution of the internal digital-to-analog converter (DAC) by 1 bit. The internal charge-redistribution DAC employs a unit capacitance of 0.5 fF, and the ADC operates near the thermal noise limit. To deal with capacitor mismatch, a reconfigurable capacitor array and a calibration procedure were developed. The prototype ADC fabricated in a 40 nm CMOS process achieves 46.8 dB SNDR and 58.2 dB SFDR at 1.1 MS/sec from a 0.5 V power supply. The FoM is 6.3 fJ/conversion-step and the chip die area is only 160 μm × 70 μm.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum-efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. A load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 1.6Gbps Digital Clock and Data Recovery Circuit A digital clock and data recovery circuit employs simple 3-level digital-to-analog converters to interface the digital loop filter to the voltage-controlled oscillator and achieves low jitter performance. A test chip fabricated in a 0.13µm CMOS process achieves BER < 10⁻¹², ±1500ppm lock-in range, ±2500ppm tracking range, recovered clock jitter of 8.9ps rms, and consumes 12mW power from a single-pin 1.2V supply while operating at 1.6Gbps. I. INTRODUCTION The ever increasing demand for large off-chip I/O bandwidth requires integration of many serial links on a large digital chip. These serial links need to be low-power, easily portable to different process technologies, and should operate reliably in noisy environments. A clock and data recovery (CDR) circuit is an integral component of these serial links and is the focus of this paper. Modern CDRs are commonly implemented using analog phase-locked loops (PLLs), as shown in Fig. 1 (1). A bang-bang phase detector (BBPD) determines sign...
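As a rough behavioral companion to this entry, the Python sketch below models a generic bang-bang (early/late) digital CDR loop with a proportional-plus-integral digital loop filter driving a phase accumulator; the 3-level DAC interface to the VCO and every numeric value used in the cited chip are abstracted away, so the constants and names here are purely illustrative.

def bangbang_cdr(data_phase, kp=2e-3, ki=1e-5, steps=2000):
    """data_phase(t): incoming data phase in UI at step t.

    The binary phase detector only reports early/late; the integral path
    learns the frequency offset while the proportional path dithers the
    recovered clock phase around the data phase."""
    clk_phase, integ = 0.0, 0.0
    for t in range(steps):
        err = 1.0 if data_phase(t) > clk_phase else -1.0   # bang-bang decision
        integ += ki * err                                  # frequency (integral) path
        clk_phase += kp * err + integ                      # phase accumulator update
    return clk_phase, integ

# Track a constant frequency offset: data phase ramps by 2e-4 UI per step.
final_phase, freq_word = bangbang_cdr(lambda t: 2e-4 * t)
print(round(final_phase, 3), round(freq_word, 6))          # freq_word settles near 2e-4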
Predicting Data-Dependent Jitter An analysis for calculating data-dependent jitter (DDJ) in a first-order system is introduced. The predicted DDJ features unique threshold crossing times with self-similar geometry. An approximation for DDJ in second-order systems is described in terms of the damping factor and natural frequency. Higher order responses demonstrate conditions under which unique threshold crossing times do not exist...
A 3.0 Gb/s clock data recovery circuits based on digital DLL for clock-embedded display interface.
Design Techniques for a 66 Gb/s 46 mW 3-Tap Decision Feedback Equalizer in 65 nm CMOS. This paper analyzes and describes design techniques enabling energy-efficient multi-tap decision feedback equalizers operated at or near the speed limits of the technology. We propose a closed-loop architecture utilizing three techniques to achieve this goal, namely a merged latch and summer, reduced latch gain, and a dynamic latch design. A 65 nm CMOS 3-tap implementation of the proposed architec...
A 2.0 Gb/s Clock-Embedded Interface for Full-HD 10-Bit 120 Hz LCD Drivers With 1/5-Rate Noise-Tolerant Phase and Frequency Recovery A 2.0 Gb/s clock-embedded interface for LCD drivers, Advanced-PPmL, has been developed for high-speed data transfer and reduced area in transmission media. Only one pair of differential signals is needed to control the LCD driver and to display images. A newly developed 1/5-rate phase frequency detector helps achieve a 25% power reduction compared with a half-rate architecture. Pulse filtering of phase control signals and a 4B5B-based interface protocol have been developed for noise-tolerant clock recovery. Power consumption in the clock and data recovery (CDR) is 93 mW with a 3.0 V supply. The rms jitter in the recovered clock is 11 ps when a PRBS7 pattern is used.
Replica compensated linear regulators for supply-regulated phase-locked loops Supply-regulated phase-locked loops rely upon the VCO voltage regulator to maintain a low sensitivity to supply noise and hence low overall jitter. By analyzing regulator supply rejection, we show that in order to simultaneously meet the bandwidth and low dropout requirements, previous regulator implementations used in supply-regulated PLLs suffer from unfavorable tradeoffs between power supply rejection and power consumption. We therefore propose a compensation technique that places the regulator's amplifier in a local replica feedback loop, stabilizing the regulator by increasing the amplifier bandwidth while lowering its gain. Even though the forward gain of the amplifier is reduced, supply noise affects the replica output in addition to the actual output, and therefore the amplifier's gain to reject supply noise is effectively restored. Analysis shows that for reasonable mismatch between the replica and actual loads, regulator performance is uncompromised, and experimental results from a 90 nm SOI test chip confirm that with the same power consumption, the proposed regulator achieves at least 4 dB higher supply rejection than the previous regulator design. Furthermore, simulations show that if not for other supply rejection-limiting components in the PLL, the supply rejection improvement of the proposed regulator is greater than 15 dB.
A 0.5–16.3 Gb/s Fully Adaptive Flexible-Reach Transceiver for FPGA in 20 nm CMOS This paper describes a 0.5–16.3 Gb/s fully adaptive wireline transceiver embedded in a 20 nm CMOS FPGA. The receiver utilizes a bandwidth-adjustable CTLE and adjustable output capacitance at the AGC to support a wide range of channel loss profiles. A modified 11-tap, 1-bit speculative DFE topology provides reliable operation across all data rates. A low-latency digital CDR ensures high tracking bandwidth while still providing the flexibility to support multiple protocols. The transceiver uses a ring oscillator with programmable main and cross-coupled inverter drive strengths to cover a wide range of operating frequencies for low data-rate operation. A wide-range, low-jitter LC-PLL utilizes a feedback divider with a synchronized CMOS down-counter without a prescaler to achieve a continuous divide ratio of 16-257. The clock distribution uses a quadrature-error correction circuit to improve phase interpolator linearity. The transceiver achieves BER 10 over a 28 dB loss backplane at 16.3 Gb/s and over legacy channels with 10G-KR characteristics at 10.3125 Gb/s. The transceiver meets jitter tolerance specifications for both PCIe Gen3 at 8 Gb/s and PCIe Gen4 at 16 Gb/s in both common-clock and spread-spectrum modes.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
A domain-specific architecture for deep neural networks. Tensor processing units improve performance per watt of neural networks in Google datacenters by roughly 50x.
Directed diffusion for wireless sensor networking Advances in processor, memory, and radio technology will enable small and cheap nodes capable of sensing, communication, and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed-diffusion paradigm for such coordination. Directed diffusion is data-centric in that all communication is for named data. All nodes in a directed-diffusion-based network are application aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network (e.g., data aggregation). We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network analytically and experimentally. Our evaluation indicates that directed diffusion can achieve significant energy savings and can outperform idealized traditional schemes (e.g., omniscient multicast) under the investigated scenarios.
Software radio architecture: a mathematical perspective As the software radio makes its transition from research to practice, it becomes increasingly important to establish provable properties of the software radio architecture on which product developers and service providers can base technology insertion decisions. Establishing provable properties requires a mathematical perspective on the software radio architecture. This paper contributes to that perspective by critically reviewing the fundamental concept of the software radio, using mathematical models to characterize this rapidly emerging technology in the context of similar technologies like programmable digital radios. The software radio delivers dynamically defined services through programmable processing capacity that has the mathematical structure of the Turing machine. The bounded recursive functions, a subset of the total recursive functions, are shown to be the largest class of Turing-computable functions for which software radios exhibit provable stability in plug-and-play scenarios. Understanding the topological properties of the software radio architecture promotes plug-and-play applications and cost-effective reuse. Analysis of these topological properties yields a layered distributed virtual machine reference model and a set of architecture design principles for the software radio. These criteria may be useful in defining interfaces among hardware, middleware, and higher level software components that are needed for cost-effective software reuse
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
A Highly Adaptive Leader Election Algorithm for Mobile Ad Hoc Networks.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.1175
0.111
0.111
0.046222
0.02835
0.015582
0.0024
0
0
0
0
0
0
0
Practical Near-Data Processing for In-Memory Analytics Frameworks. The end of Dennard scaling has made all systems energy-constrained. For data-intensive applications with limited temporal locality, the major energy bottleneck is data movement between processor chips and main memory modules. For such workloads, the best way to optimize energy is to place processing near the data in main memory. Advances in 3D integration provide an opportunity to implement near-data processing (NDP) without the technology problems that similar efforts had in the past. This paper develops the hardware and software of an NDP architecture for in-memory analytics frameworks, including MapReduce, graph processing, and deep neural networks. We develop simple but scalable hardware support for coherence, communication, and synchronization, and a runtime system that is sufficient to support analytics frameworks with complex data patterns while hiding all the details of the NDP hardware. Our NDP architecture provides up to 16x performance and energy advantage over conventional approaches, and 2.5x over recently-proposed NDP systems. We also investigate the balance between processing and memory throughput, as well as the scalability and physical and logical organization of the memory system. Finally, we show that it is critical to optimize software frameworks for spatial locality as it leads to 2.9x efficiency improvements for NDP.
Adaptive Scheduling for Systems with Asymmetric Memory Hierarchies. Conventional multicores rely on deep cache hierarchies to reduce data movement. Recent advances in die stacking have enabled near-data processing (NDP) systems that reduce data movement by placing cores close to memory. NDP cores enjoy cheaper memory accesses and are more area-constrained, so they use shallow cache hierarchies instead. Since neither shallow nor deep hierarchies work well for all applications, prior work has proposed systems that incorporate both. These asymmetric memory hierarchies can be highly beneficial, but they require scheduling computation to the right hierarchy. We present AMS, an adaptive scheduler that automatically finds high-quality thread-to-hierarchy mappings. AMS monitors threads, accurately models their performance under different hierarchies and core types, and adapts algorithms first proposed for cache partitioning to produce high-quality schedules. AMS is cheap enough to use online, so it adapts to program phases, and performs within 1% of an exhaustive-search scheduler. As a result, AMS outperforms asymmetry-oblivious schedulers by up to 37% and by 18% on average.
Friends and neighbors on the Web The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.
Effect of Job Size Characteristics on Job Scheduling Performance The workload characteristics of a parallel computer depend on the administration policy and the user community of the computer system. An administrator of a parallel computer system needs to select an appropriate scheduling algorithm that schedules multiple jobs on the system efficiently. The goal of the work presented in this paper is to investigate the mechanisms by which job size characteristics affect job scheduling performance. To this end, the paper evaluates the performance of job scheduling algorithms under various workload models, each of which has a particular characteristic related to the number of processors requested by a job, and analyzes which job size characteristics affect job scheduling performance significantly in the evaluation. The results showed that: (1) most scheduling algorithms classified as first-fit scheduling showed the best performance and were not affected by job size characteristics, and (2) certain job size characteristics affected the performance of priority scheduling significantly. The analysis of the results showed that the LJF algorithm, which dispatches the largest job first, would perfectly pack jobs onto idle processors at high load when all jobs requested power-of-two processors and the number of processors on the parallel computer was a power of two.
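To make the two scheduling families discussed above concrete, here is a small illustrative sketch (not the paper's simulator) contrasting largest-job-first dispatch with first-fit dispatch over a queue of jobs; the job mix and processor count are made up.

```python
# Illustrative sketch of two dispatch policies discussed above:
# largest-job-first (LJF) versus first-fit, on a machine with a fixed
# number of free processors. Jobs are (name, processors_requested).

def ljf_dispatch(queue, free_procs):
    """Dispatch as many queued jobs as fit, largest first."""
    started = []
    for job in sorted(queue, key=lambda j: j[1], reverse=True):
        if job[1] <= free_procs:
            free_procs -= job[1]
            started.append(job)
    return started, free_procs

def first_fit_dispatch(queue, free_procs):
    """Dispatch jobs in arrival order, skipping any that do not fit."""
    started = []
    for job in queue:
        if job[1] <= free_procs:
            free_procs -= job[1]
            started.append(job)
    return started, free_procs

queue = [("a", 2), ("b", 8), ("c", 4), ("d", 2)]
print(ljf_dispatch(queue, free_procs=8))        # packs the 8-processor job first
print(first_fit_dispatch(queue, free_procs=8))  # starts a, c, d; b waits
```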
ComputeDRAM: In-Memory Compute Using Off-the-Shelf DRAMs In-memory computing has long been promised as a solution to the "Memory Wall" problem. Recent work has proposed using charge sharing on the bit-lines of a memory in order to compute in-place and with massive parallelism, all without having to move data across the memory bus. Unfortunately, prior work has required modification to RAM designs (e.g. adding multiple row decoders) in order to open multiple rows simultaneously. So far, the competitive and low-margin nature of the DRAM industry has made commercial DRAM manufacturers resist adding any additional logic into DRAM. This paper addresses the need for in-memory computation with little to no change to DRAM designs. It is the first work to demonstrate in-memory computation with off-the-shelf, unmodified, commercial DRAM. This is accomplished by violating the nominal timing specification and activating multiple rows in rapid succession, which happens to leave multiple rows open simultaneously, thereby enabling bit-line charge sharing. We use a constraint-violating command sequence to implement and demonstrate row copy, logical OR, and logical AND in unmodified, commodity DRAM. Subsequently, we employ these primitives to develop an architecture for arbitrary, massively parallel computation. Utilizing a customized DRAM controller in an FPGA and commodity DRAM modules, we characterize this opportunity in hardware for all major DRAM vendors. This work stands as a proof of concept that in-memory computation is possible with unmodified DRAM modules and that there exists a financially feasible way for DRAM manufacturers to support in-memory compute.
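A common way to understand the bit-line charge-sharing trick described above is as a majority operation over the simultaneously opened cells: with a constant third row, the majority collapses to AND (constant 0) or OR (constant 1). The following is a purely functional toy model of that idea, not a timing-accurate DRAM model.

```python
# Toy functional model of three-row bit-line charge sharing: the shared bit
# line settles to the majority of the three cell values, so a constant third
# row turns the operation into AND (constant 0) or OR (constant 1).

def majority(a, b, c):
    return 1 if (a + b + c) >= 2 else 0

def rowwise(op_const, row_a, row_b):
    """Elementwise AND (op_const=0) or OR (op_const=1) of two DRAM rows."""
    return [majority(a, b, op_const) for a, b in zip(row_a, row_b)]

row_a = [1, 0, 1, 1]
row_b = [1, 1, 0, 1]
print("AND:", rowwise(0, row_a, row_b))  # [1, 0, 0, 1]
print("OR: ", rowwise(1, row_a, row_b))  # [1, 1, 1, 1]
```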
GP-SIMD Processing-in-Memory GP-SIMD, a novel hybrid general-purpose SIMD computer architecture, resolves the issue of data synchronization by in-memory computing through combining data storage and massively parallel processing. GP-SIMD employs a two-dimensional access memory with modified SRAM storage cells and a bit-serial processing unit per each memory row. An analytic performance model of the GP-SIMD architecture is presented, comparing it to associative processor and to conventional SIMD architectures. Cycle-accurate simulation of four workloads supports the analytical comparison. Assuming a moderate die area, GP-SIMD architecture outperforms both the associative processor and conventional SIMD coprocessor architectures by almost an order of magnitude while consuming less power.
Scheduling Techniques for GPU Architectures with Processing-In-Memory Capabilities. Processing data in or near memory (PIM), as opposed to in conventional computational units in a processor, can greatly alleviate the performance and energy penalties of data transfers from/to main memory. Graphics Processing Unit (GPU) architectures and applications, where main memory bandwidth is a critical bottleneck, can benefit from the use of PIM. To this end, an application should be properly partitioned and scheduled to execute on either the main, powerful GPU cores that are far away from memory or the auxiliary, simple GPU cores that are close to memory (e.g., in the logic layer of 3D-stacked DRAM). This paper investigates two key code scheduling issues in such a GPU architecture that has PIM capabilities, to maximize performance and energy-efficiency: (1) how to automatically identify the code segments, or kernels, to be offloaded to the cores in memory, and (2) how to concurrently schedule multiple kernels on the main GPU cores and the auxiliary GPU cores in memory. We develop two new runtime techniques: (1) a regression-based affinity prediction model and mechanism that accurately identifies which kernels would benefit from PIM and offloads them to GPU cores in memory, and (2) a concurrent kernel management mechanism that uses the affinity prediction model, a new kernel execution time prediction model, and kernel dependency information to decide which kernels to schedule concurrently on main GPU cores and the GPU cores in memory. Our experimental evaluations across 25 GPU applications demonstrate that these two techniques can significantly improve both application performance (by 25% and 42%, respectively, on average) and energy efficiency (by 28% and 27%).
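The regression-based affinity prediction described above can be pictured as a simple model over kernel features that estimates the speedup of running a kernel on the near-memory cores. The sketch below is hedged: the feature set, coefficients, and threshold are invented for illustration and are not the paper's actual model.

```python
# Hedged sketch of a regression-based offload decision in the spirit of the
# technique above. Features and coefficients are made up for illustration.

def predict_pim_speedup(features, coeffs, intercept):
    """Linear regression: predicted speedup of running a kernel on PIM cores."""
    return intercept + sum(c * x for c, x in zip(coeffs, features))

# Hypothetical features: (memory intensity, compute intensity, kernel size)
coeffs = (1.8, -0.9, 0.05)
intercept = 0.6

kernels = {
    "streaming_copy": (0.9, 0.1, 2.0),   # memory-bound: good PIM candidate
    "dense_matmul":   (0.2, 0.9, 4.0),   # compute-bound: keep on main GPU cores
}
for name, feats in kernels.items():
    s = predict_pim_speedup(feats, coeffs, intercept)
    decision = "offload to PIM" if s > 1.0 else "run on main cores"
    print(f"{name}: predicted speedup {s:.2f} -> {decision}")
```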
Near-Data Processing: Insights from a MICRO-46 Workshop. The cost of data movement in big-data systems motivates careful examination of near-data processing (NDP) frameworks. The concept of NDP was actively researched in the 1990s, but gained little commercial traction. After a decade-long dormancy, interest in this topic has spiked. A workshop on NDP was organized at MICRO-46 and was well attended. Given the interest, the organizers and keynote speaker...
Deep learning Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. 
It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition1, 2, 3, 4 and speech recognition5, 6, 7, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules8, analysing particle accelerator data9, 10, reconstructing brain circuits11, and predicting the effects of mutations in non-coding DNA on gene expression and disease12, 13. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding14, particularly topic classification, sentiment analysis, question answering15 and language translation16, 17. We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress. The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine. To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector. The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average. In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. 
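The passage above describes stochastic gradient descent in words. A minimal sketch of the same loop, on a toy one-weight model with made-up data and learning rate, looks like this:

```python
# Minimal sketch of stochastic gradient descent as described above: show one
# example at a time, compute the gradient of the error, nudge the weight
# against the gradient. The data and learning rate are illustrative.
import random

# Toy data generated by y = 3*x + noise; we try to recover the weight.
data = [(0.1 * i, 3.0 * (0.1 * i) + random.uniform(-0.1, 0.1)) for i in range(20)]

w = 0.0        # single adjustable weight ("knob")
lr = 0.05      # learning rate

for epoch in range(200):
    random.shuffle(data)
    for x, y in data:            # one example at a time = stochastic
        pred = w * x
        error = pred - y         # derivative of 0.5*(pred - y)**2 wrt pred
        grad = error * x         # chain rule: d(loss)/dw
        w -= lr * grad           # step opposite to the gradient

print("learned weight:", round(w, 2))  # close to 3.0
```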
This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques18. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training. Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane19. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category. This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods20, but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples21. The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning. A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects. From the earliest days of pattern recognition22, 23, the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. 
As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s24, 25, 26, 27. The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module. Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training28. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1). In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error. In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder29, 30. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. 
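The preceding passage explains backpropagation as repeated application of the chain rule through layers of weighted sums and ReLU non-linearities. The sketch below implements exactly that for a tiny two-layer network; the data, sizes, and hyperparameters are illustrative.

```python
# Small sketch of the forward and backward passes described above: a two-layer
# network with a ReLU hidden layer, trained by backpropagation (chain rule).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                     # 64 examples, 3 inputs
y = np.tanh(X[:, :1] * 2.0 - X[:, 1:2])          # a bounded nonlinear target

W1, b1 = rng.normal(scale=0.5, size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.05

for step in range(500):
    # Forward pass: weighted sums and the ReLU non-linearity f(z) = max(z, 0).
    z1 = X @ W1 + b1
    h = np.maximum(z1, 0.0)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: propagate gradients from the output toward the input.
    dpred = 2.0 * (pred - y) / len(X)
    dW2, db2 = h.T @ dpred, dpred.sum(axis=0)
    dh = dpred @ W2.T
    dz1 = dh * (z1 > 0)                          # ReLU gradient
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean-squared error:", round(float(loss), 4))  # decreases over training
```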
Hence, it does not much matter which of these saddle points the algorithm gets stuck at. Interest in deep feedforward networks was revived around 2006 (refs 31,32,33,34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation33, 34, 35. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited36. The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program37 and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary38 and was quickly developed to give record-breaking results on a large vocabulary task39. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups6 and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting40, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets. There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet)41, 42. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community. ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers. The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. 
Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array. Mathematically, the filtering operation performed by a feature map is a discrete convolution, hence the name. Although the role of the convolutional layer is to detect local conjunctions of features from the previous layer, the role of the pooling layer is to merge semantically similar features into one. Because the relative positions of the features forming a motif can vary somewhat, reliably detecting the motif can be done by coarse-graining the position of each feature. A typical pooling unit computes the maximum of a local patch of units in one feature map (or in a few feature maps). Neighbouring pooling units take input from patches that are shifted by more than one row or column, thereby reducing the dimension of the representation and creating an invariance to small shifts and distortions. Two or three stages of convolution, non-linearity and pooling are stacked, followed by more convolutional and fully-connected layers. Backpropagating gradients through a ConvNet is as simple as through a regular deep network, allowing all the weights in all the filter banks to be trained. Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance. The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience43, and the overall architecture is reminiscent of the LGN–V1–V2–V4–IT hierarchy in the visual cortex ventral pathway44. When ConvNet models and monkeys are shown the same picture, the activations of high-level units in the ConvNet explains half of the variance of random sets of 160 neurons in the monkey's inferotemporal cortex45. ConvNets have their roots in the neocognitron46, the architecture of which was somewhat similar, but did not have an end-to-end supervised-learning algorithm such as backpropagation. A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words47, 48. There have been numerous applications of convolutional networks going back to the early 1990s, starting with time-delay neural networks for speech recognition47 and document reading42. 
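The two layer types just described, a convolutional layer with shared filter weights followed by pooling, can be shown in a few lines of NumPy. This is a single-feature-map toy, written for clarity rather than efficiency.

```python
# Sketch of a single-feature-map 2D convolution (shared filter weights at
# every location) with a ReLU, followed by 2x2 max pooling.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)  # local weighted sum
    return np.maximum(out, 0.0)                                      # ReLU non-linearity

def max_pool2x2(fmap):
    h, w = fmap.shape
    return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.zeros((6, 6))
image[:, 3] = 1.0                                   # a vertical edge
vertical_edge_filter = np.array([[1., -1.], [1., -1.]])
fmap = conv2d(image, vertical_edge_filter)
print(max_pool2x2(fmap))                            # strong responses along the edge
```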
The document reading system used a ConvNet trained jointly with a probabilistic model that implemented language constraints. By the late 1990s this system was reading over 10% of all the cheques in the United States. A number of ConvNet-based optical character recognition and handwriting recognition systems were later deployed by Microsoft49. ConvNets were also experimented with in the early 1990s for object detection in natural images, including faces and hands50, 51, and for face recognition52. Since the early 2000s, ConvNets have been applied with great success to the detection, segmentation and recognition of objects and regions in images. These were all tasks in which labelled data was relatively abundant, such as traffic sign recognition53, the segmentation of biological images54 particularly for connectomics55, and the detection of faces, text, pedestrians and human bodies in natural images36, 50, 51, 56, 57, 58. A major recent practical success of ConvNets is face recognition59. Importantly, images can be labelled at the pixel level, which will have applications in technology, including autonomous mobile robots and self-driving cars60, 61. Companies such as Mobileye and NVIDIA are using such ConvNet-based methods in their upcoming vision systems for cars. Other applications gaining importance involve natural language understanding14 and speech recognition7. Despite these successes, ConvNets were largely forsaken by the mainstream computer-vision and machine-learning communities until the ImageNet competition in 2012. When deep convolutional networks were applied to a data set of about a million images from the web that contained 1,000 different classes, they achieved spectacular results, almost halving the error rates of the best competing approaches1. This success came from the efficient use of GPUs, ReLUs, a new regularization technique called dropout62, and techniques to generate more training examples by deforming the existing ones. This success has brought about a revolution in computer vision; ConvNets are now the dominant approach for almost all recognition and detection tasks4, 58, 59, 63, 64, 65 and approach human performance on some tasks. A recent stunning demonstration combines ConvNets and recurrent net modules for the generation of image captions (Fig. 3). Recent ConvNet architectures have 10 to 20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units. Whereas training such large networks could have taken weeks only two years ago, progress in hardware, software and algorithm parallelization have reduced training times to a few hours. The performance of ConvNet-based vision systems has caused most major technology companies, including Google, Facebook, Microsoft, IBM, Yahoo!, Twitter and Adobe, as well as a quickly growing number of start-ups to initiate research and development projects and to deploy ConvNet-based image understanding products and services. ConvNets are easily amenable to efficient hardware implementations in chips or field-programmable gate arrays66, 67. A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars. Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations21. 
Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure40. First, learning distributed representations enable generalization to new combinations of the values of learned features beyond those seen during training (for example, 2n combinations are possible with n binary features)68, 69. Second, composing layers of representation in a deep net brings the potential for another exponential advantage70 (exponential in the depth). The hidden layers of a multilayer neural network learn to represent the network's inputs in a way that makes it easy to predict the target outputs. This is nicely demonstrated by training a multilayer neural network to predict the next word in a sequence from a local context of earlier words71. Each word in the context is presented to the network as a one-of-N vector, that is, one component has a value of 1 and the rest are 0. In the first layer, each word creates a different pattern of activations, or word vectors (Fig. 4). In a language model, the other layers of the network learn to convert the input word vectors into an output word vector for the predicted next word, which can be used to predict the probability for any word in the vocabulary to appear as the next word. The network learns word vectors that contain many active components each of which can be interpreted as a separate feature of the word, as was first demonstrated27 in the context of learning distributed representations for symbols. These semantic features were not explicitly present in the input. They were discovered by the learning procedure as a good way of factorizing the structured relationships between the input and output symbols into multiple 'micro-rules'. Learning word vectors turned out to also work very well when the word sequences come from a large corpus of real text and the individual micro-rules are unreliable71. When trained to predict the next word in a news story, for example, the learned word vectors for Tuesday and Wednesday are very similar, as are the word vectors for Sweden and Norway. Such representations are called distributed representations because their elements (the features) are not mutually exclusive and their many configurations correspond to the variations seen in the observed data. These word vectors are composed of learned features that were not determined ahead of time by experts, but automatically discovered by the neural network. Vector representations of words learned from text are now very widely used in natural language applications14, 17, 72, 73, 74, 75, 76. The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast 'intuitive' inference that underpins effortless commonsense reasoning. 
Before the introduction of neural language models71, the standard approach to statistical modelling of language did not exploit distributed representations: it was based on counting frequencies of occurrences of short symbol sequences of length up to N (called N-grams). The number of possible N-grams is on the order of VN, where V is the vocabulary size, so taking into account a context of more than a handful of words would require very large training corpora. N-grams treat each word as an atomic unit, so they cannot generalize across semantically related sequences of words, whereas neural language models can because they associate each word with a vector of real valued features, and semantically related words end up close to each other in that vector space (Fig. 4). When backpropagation was first introduced, its most exciting use was for training recurrent neural networks (RNNs). For tasks that involve sequential inputs, such as speech and language, it is often better to use RNNs (Fig. 5). RNNs process an input sequence one element at a time, maintaining in their hidden units a 'state vector' that implicitly contains information about the history of all the past elements of the sequence. When we consider the outputs of the hidden units at different discrete time steps as if they were the outputs of different neurons in a deep multilayer network (Fig. 5, right), it becomes clear how we can apply backpropagation to train RNNs. RNNs are very powerful dynamic systems, but training them has proved to be problematic because the backpropagated gradients either grow or shrink at each time step, so over many time steps they typically explode or vanish77, 78. Thanks to advances in their architecture79, 80 and ways of training them81, 82, RNNs have been found to be very good at predicting the next character in the text83 or the next word in a sequence75, but they can also be used for more complex tasks. For example, after reading an English sentence one word at a time, an English 'encoder' network can be trained so that the final state vector of its hidden units is a good representation of the thought expressed by the sentence. This thought vector can then be used as the initial hidden state of (or as extra input to) a jointly trained French 'decoder' network, which outputs a probability distribution for the first word of the French translation. If a particular first word is chosen from this distribution and provided as input to the decoder network it will then output a probability distribution for the second word of the translation and so on until a full stop is chosen17, 72, 76. Overall, this process generates sequences of French words according to a probability distribution that depends on the English sentence. This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion84, 85. Instead of translating the meaning of a French sentence into an English sentence, one can learn to 'translate' the meaning of an image into an English sentence (Fig. 3). The encoder here is a deep ConvNet that converts the pixels into an activity vector in its last hidden layer. 
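The recurrent state update described above is compact enough to write out directly: the hidden state is a function of the current input and the previous state, with the same weights reused at every time step. The toy vocabulary, embedding size, and weights below are placeholders.

```python
# Minimal sketch of a vanilla RNN step: the hidden "state vector" summarizes
# the sequence seen so far, using the same weights at every time step.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat"]
embed = {w: rng.normal(size=4) for w in vocab}    # toy word vectors

W_xh = rng.normal(scale=0.3, size=(4, 8))         # input-to-hidden weights
W_hh = rng.normal(scale=0.3, size=(8, 8))         # hidden-to-hidden (recurrent) weights
b_h = np.zeros(8)

h = np.zeros(8)                                   # initial state vector
for word in ["the", "cat", "sat"]:
    h = np.tanh(embed[word] @ W_xh + h @ W_hh + b_h)   # same weights every step

print("state vector summarizing the sequence:", np.round(h, 2))
```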
The decoder is an RNN similar to the ones used for machine translation and neural language modelling. There has been a surge of interest in such systems recently (see examples mentioned in ref. 86). RNNs, once unfolded in time (Fig. 5), can be seen as very deep feedforward networks in which all the layers share the same weights. Although their main purpose is to learn long-term dependencies, theoretical and empirical evidence shows that it is difficult to learn to store information for very long78. To correct for that, one idea is to augment the network with an explicit memory. The first proposal of this kind is the long short-term memory (LSTM) networks that use special hidden units, the natural behaviour of which is to remember inputs for a long time79. A special unit called the memory cell acts like an accumulator or a gated leaky neuron: it has a connection to itself at the next time step that has a weight of one, so it copies its own real-valued state and accumulates the external signal, but this self-connection is multiplicatively gated by another unit that learns to decide when to clear the content of the memory. LSTM networks have subsequently proved to be more effective than conventional RNNs, especially when they have several layers for each time step87, enabling an entire speech recognition system that goes all the way from acoustics to the sequence of characters in the transcription. LSTM networks or related forms of gated units are also currently used for the encoder and decoder networks that perform so well at machine translation17, 72, 76. Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a 'tape-like' memory that the RNN can choose to read from or write to88, and memory networks, in which a regular network is augmented by a kind of associative memory89. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions. Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation. Neural Turing machines can be taught 'algorithms'. Among other things, they can learn to output a sorted list of symbols when their input consists of an unsorted sequence in which each symbol is accompanied by a real value that indicates its priority in the list88. Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference90. In one test example, the network is shown a 15-sentence version of the The Lord of the Rings and correctly answers questions such as “where is Frodo now?”89. Unsupervised learning91, 92, 93, 94, 95, 96, 97, 98 had a catalytic effect in reviving interest in deep learning, but has since been overshadowed by the successes of purely supervised learning. Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object. 
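The gated memory cell described above, a self-connection of weight one modulated by learned input and forget gates, translates into a short update rule. The following sketch uses random placeholder weights purely to show the structure of one LSTM time step.

```python
# Sketch of an LSTM time step: the cell state carries a gated self-connection,
# an input gate decides what to accumulate, and a forget gate learns when to
# clear the memory. Weights here are random placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, params):
    """One LSTM time step: returns the new hidden state and cell state."""
    Wf, Wi, Wo, Wg, bf, bi, bo, bg = params
    z = np.concatenate([x, h])
    f = sigmoid(z @ Wf + bf)          # forget gate: when to clear the memory
    i = sigmoid(z @ Wi + bi)          # input gate: what to accumulate
    o = sigmoid(z @ Wo + bo)          # output gate
    g = np.tanh(z @ Wg + bg)          # candidate content
    c = f * c + i * g                 # gated self-connection of the memory cell
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(2)
n_in, n_hid = 3, 5
params = [rng.normal(scale=0.3, size=(n_in + n_hid, n_hid)) for _ in range(4)] + \
         [np.zeros(n_hid) for _ in range(4)]

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(4):                    # run a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, params)
print("hidden state after 4 steps:", np.round(h, 2))
```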
Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way using a small, high-resolution fovea with a large, low-resolution surround. We expect much of the future progress in vision to come from systems that are trained end-to-end and combine ConvNets with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems99 at classification tasks and produce impressive results in learning to play many different video games100. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. We expect systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time76, 86. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors101.
Simba: Scaling Deep-Learning Inference with Multi-Chip-Module-Based Architecture Package-level integration using multi-chip-modules (MCMs) is a promising approach for building large-scale systems. Compared to a large monolithic die, an MCM combines many smaller chiplets into a larger system, substantially reducing fabrication and design costs. Current MCMs typically only contain a handful of coarse-grained large chiplets due to the high area, performance, and energy overheads associated with inter-chiplet communication. This work investigates and quantifies the costs and benefits of using MCMs with fine-grained chiplets for deep learning inference, an application area with large compute and on-chip storage requirements. To evaluate the approach, we architected, implemented, fabricated, and tested Simba, a 36-chiplet prototype MCM system for deep-learning inference. Each chiplet achieves 4 TOPS peak performance, and the 36-chiplet MCM package achieves up to 128 TOPS and up to 6.1 TOPS/W. The MCM is configurable to support a flexible mapping of DNN layers to the distributed compute and storage units. To mitigate inter-chiplet communication overheads, we introduce three tiling optimizations that improve data locality. These optimizations achieve up to 16% speedup compared to the baseline layer mapping. Our evaluation shows that Simba can process 1988 images/s running ResNet-50 with batch size of one, delivering inference latency of 0.50 ms.
Control-flow integrity principles, implementations, and applications Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, control-flow integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: It is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.
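The abstract above mentions a protected shadow call stack as one application of CFI enforcement. The following toy model is a software analogue only (not the paper's machine-code rewriting): the return target recorded at call time on a protected stack must match at return time, so a corrupted return address is caught.

```python
# Toy model of a protected shadow call stack, one of the CFI-based defenses
# mentioned above. Purely illustrative; real CFI operates on machine code.

shadow_stack = []
main_stack = []          # stands in for the ordinary, attackable call stack

def call(return_target, callee, *args):
    main_stack.append(return_target)
    shadow_stack.append(return_target)        # protected copy
    result = callee(*args)
    actual, expected = main_stack.pop(), shadow_stack.pop()
    if actual != expected:
        raise RuntimeError("CFI violation: return address was tampered with")
    return actual, result

def helper(x):
    return x * 2

print(call("ret_site_1", helper, 21))          # ('ret_site_1', 42)

def evil(x):
    main_stack[-1] = "attacker_gadget"         # simulate a stack-smashing overwrite
    return x

try:
    call("ret_site_2", evil, 0)
except RuntimeError as e:
    print(e)                                    # CFI violation detected
```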
Error control coding in software radios: an FPGA approach Among the various tasks performed by software radios is the reconfigurability of the error control coding algorithm to match the requirement of the radio personality. In the digital radio processor, proper assignment of tasks between DSPs and FPGAs provides performance improvements over the use of DSPs alone. The error control coding functions are a good candidate to reside on the FPGA side of this functional partition. Unfortunately, good VLSI designs for codes using BCH or Reed-Solomon codes do not map well to FPGAs. Good FPGA designs must parallelize at every opportunity, minimize timing delays through intelligent floor planning, and use each logic block to its fullest. We demonstrate the merits of these concepts by comparing the performance of popular designs of finite field multipliers
Memoryless Approach to the LQ and LQG Problems with Variable Input Delay This note studies the LQ and LQG problems for linear time-invariant systems with a single time-varying input delay and instantaneous (memoryless) state feedback. We extend the memoryless state feedback solution proposed in [1] in two directions. We prove that in the deterministic case a memoryless state feedback can in general be optimal only up to a certain delay, for which we provide a sufficient, and sometimes strict, bound. Moreover, we show that this memoryless control is also optimal in the case of time-varying delays and that the quadratic cost functional has the same value as in the case without delay. For time-varying delays, the control law requires that the relationship between the time points at which the input is generated and applied is known and invertible, even though the delay function need not be differentiable or even continuous. Finally, we prove that the cost functional is bounded in the stochastic case as well, for the same delay interval as in the deterministic case, but with a larger cost than the delay-less LQG solution.
Neuropixels Data-Acquisition System: A Scalable Platform for Parallel Recording of 10,000+ Electrophysiological Signals. Although CMOS fabrication has enabled a quick evolution in the design of high-density neural probes and neural-recording chips, the scaling and miniaturization of the complete data-acquisition systems has happened at a slower pace. This is mainly due to the complexity and the many requirements that change depending on the specific experimental settings. In essence, the fundamental challenge of a n...
Scores: 1.01181, 0.011128, 0.010526, 0.010526, 0.010526, 0.007895, 0.00751, 0.004882, 0.000947, 0.000018, 0, 0, 0, 0
Network-on-Chip Firewall: Countering Defective and Malicious System-on-Chip Hardware. Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Security in MPSoCs: A NoC Firewall and an Evaluation Framework In an MPSoC, a CPU can access physical resources such as on-chip memory or I/O devices. Along with normal requests, malevolent ones, generated by malicious processes running on one or more CPUs, can occur. A protection mechanism is therefore required to prevent injection of malicious instructions or data across the system. We propose a self-contained NoC firewall at the network interface layer which, by checking the physical address against a set of rules, rejects untrusted CPU requests to the on-chip memory, thus protecting all legitimate processes running in a multicore SoC. To sustain high performance, we implement the firewall in hardware, with rule checking performed at segment level based on deny rules. Furthermore, to evaluate its impact, we develop a novel framework on top of the gem5 simulation environment, coupling ARM technology and an instance of a commercial point-to-point interconnect from STMicroelectronics. Simulation tests include scenarios in which legitimate and malicious processes, running on different CPUs, request access to shared memory. Our results indicate that a firewall implementation at the network interface can have a positive effect on network performance by reducing both end-to-end network delay and power consumption. We also show that our coarse-grain firewall can prevent saturation of the on-chip network and performs better than fine-grain alternatives that perform rule checking at page level. Simulation results are accompanied by field measurements performed on a Zedboard platform running Linux, where the NoC Firewall is implemented as a reconfigurable, memory-mapped device on top of an AMBA AXI4 interconnect fabric.
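As a rough illustration of segment-level deny-rule checking at a network interface, the sketch below rejects a request before it enters the NoC if any rule matches the initiator, address segment, and access type. The rule table, initiator IDs, and addresses are hypothetical.

```python
# Illustrative sketch of segment-level deny-rule checking at a NoC network
# interface. The rules and addresses below are hypothetical.

deny_rules = [
    # (initiator_id, segment_base, segment_size, denied accesses {"r", "w"})
    (2, 0x4000_0000, 0x0010_0000, {"w"}),        # CPU2 may not write the secure segment
    (3, 0x4000_0000, 0x0010_0000, {"r", "w"}),   # CPU3 may neither read nor write it
]

def firewall_allows(initiator, addr, access):
    for who, base, size, denied in deny_rules:
        if who == initiator and base <= addr < base + size and access in denied:
            return False          # reject before the request enters the NoC
    return True

print(firewall_allows(1, 0x4000_0040, "w"))  # True  (no rule for CPU1)
print(firewall_allows(2, 0x4000_0040, "w"))  # False (deny rule hit)
print(firewall_allows(2, 0x4000_0040, "r"))  # True  (reads not denied for CPU2)
```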
WHISK: an uncore architecture for dynamic information flow tracking in heterogeneous embedded SoCs In this paper, we describe for the first time, how Dynamic Information Flow Tracking (DIFT) can be implemented for heterogeneous designs that contain one or more on-chip accelerators attached to a network-on-chip. We observe that implementing DIFT for such systems requires holistic platform level view, i.e., designing individual components in the heterogeneous system to be capable of supporting DIFT is necessary but not sufficient to correctly implement full-system DIFT. Based on this observation we present a new system architecture for implementing DIFT, and also describe wrappers that provide DIFT functionality for third-party IP components. Results show that our implementation minimally impacts performance of programs that do not utilize DIFT, and the price of security is constant for modest amounts of tagging and then sub-linearly increases with the amount of tagging.
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
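A way to picture the rule-based metadata processing described above is a cache keyed by the opcode and the operand tags, filled on a miss by a software-defined policy handler. The sketch below uses a toy taint-tracking policy; the names and the policy itself are illustrative, not the PUMP's actual interface.

```python
# Hedged sketch in the spirit of programmable metadata processing: every
# operand carries an uninterpreted tag, and a rule cache maps
# (opcode, operand tags) to a result tag, with a miss handler that consults
# the installed policy. The taint policy below is a toy example.

rule_cache = {}

def taint_policy(opcode, tag_a, tag_b):
    """Toy policy: the result is tainted if either input is tainted."""
    return "tainted" if "tainted" in (tag_a, tag_b) else "clean"

def tagged_op(opcode, a, tag_a, b, tag_b):
    key = (opcode, tag_a, tag_b)
    if key not in rule_cache:                 # miss: consult the policy handler
        rule_cache[key] = taint_policy(opcode, tag_a, tag_b)
    result = a + b if opcode == "add" else a * b
    return result, rule_cache[key]

print(tagged_op("add", 2, "clean", 3, "clean"))     # (5, 'clean')
print(tagged_op("mul", 2, "tainted", 3, "clean"))   # (6, 'tainted')
```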
Towards Hardware-Assisted Security for IoT Systems As computing devices become more commonplace in every day life, we have seen an increase of possible attacks on commercial devices and critical infrastructure. As a result, both academia and industry have proposed solutions to mitigate or outright eliminate the ever expanding set of viable targets. Initially, this resulted in an influx of software-based defenses against these emerging threats. Unfortunately, it was found that software solutions could be bypassed with more advanced attacks and often resulted in high performance overhead. As such, hardware-assisted security defenses have been developed to provide improved security while keeping performance overhead to manageable levels, especially for IoT devices. In this paper, we will provide a survey of prominent hardware-assisted security defenses. We will enumerate the attacks these defenses aim to protect, as well as their effectiveness. We will also discuss the implications in both performance and system design. A comparison between approaches that target the same set of issues, and possible directions for future research will be presented.
A Cautionary Tale about Detecting Malware Using Hardware Performance Counters and Machine Learning Editor’s note: This article revisits the literature around the use of hardware performance counters for malware detection, highlighting discrepancies, and providing a retrospection of challenges for future research. —Rosario Cammarota, Intel Labs —Francesco Regazzoni, University of Amsterdam and Università della Svizzera Italiana.
NOC-centric Security of Reconfigurable SoC This paper presents a first solution for NoC-based communication security. Our proposal is based on simple network interfaces implementing distributed security rule checking and a separation between security and application channels. We detail a four-step security policy and show how, with usual NoC techniques, a designer can protect a reconfigurable SoC against attacks that result in abnormal communication behaviors. We introduce a new kind of relative and self-complemented street-sign routing adapted to path-based IP identification and the needs of reconfigurable architectures. Our approach is illustrated with a synthetic Set-Top box, and we also show how to transform a real-life bus-based security solution to match our NoC-based architecture.
Threaded code The concept of “threaded code” is presented as an alternative to machine language code. Hardware and software realizations of it are given. In software it is realized as interpretive code not needing an interpreter. Extensions and optimizations are mentioned.
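The threaded-code idea described above can be mimicked in a few lines: the program is a flat sequence of references to primitive routines (plus inline operands), and execution just walks the list dispatching each reference, with no decode step. This Python toy is only an analogue of the original machine-level technique.

```python
# Toy sketch of threaded code: the "program" is a list of routine references
# and inline operands; execution dispatches each routine in turn.

stack = []

def lit(program, pc):          # push the inline operand that follows
    stack.append(program[pc])
    return pc + 1

def add(program, pc):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)
    return pc

def emit(program, pc):
    print(stack.pop())
    return pc

# Threaded code for "print (2 + 40)".
program = [lit, 2, lit, 40, add, emit]

pc = 0
while pc < len(program):
    routine = program[pc]
    pc = routine(program, pc + 1)   # each routine returns the next pc
```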
A Logic-in-Memory Computer If, as presently projected, the cost of microelectronic arrays in the future will tend to reflect the number of pins on the array rather than the number of gates, the logic-in-memory array is an extremely attractive computer component. Such an array is essentially a microelectronic memory with some combinational logic associated with each storage element. A logic-in-memory computer is described that is organized around a logic-enhanced ``cache'' memory array. Used as a cache, a logic-in-memory array performs as a high-speed buffer between a conventional CPU and a conventional memory. The effect on the computer system of the cache and its control mechanism is to make the main memory appear to have all of the processing capabilities and almost the same performance as the cache. Operations within the array are naturally organized as operations on blocks of data called ``sectors.'' Among the operations that can be performed are arithmetic and logical operations on pairs of elements from two sectors, and a variety of associative search operations on a single sector. For such operations, the main memory of the computer appears to the program to be composed of a collection of logic-in-memory arrays, each the size of a sector. Because of the high-speed, highly parallel sector operations, the logic-in-memory computer points to a new direction for achieving orders of magnitude increase in computer performance. Moreover, since the computer is specifically organized for large-scale integration, the increased performance might be obtained for a comparatively small dollar cost.
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Modeling of software radio aspects by mapping of SDL and CORBA With the evolution of 3rd generation mobile communications standardization, the software radio concept has the potential to offer a pragmatic solution - a software implementation that allows the mobile terminal to adapt dynamically to its radio environment. The mapping of SDL and CORBA mechanisms is introduced, in order to provide a generic platform for the implementation of future mobile services, supporting standardized interfaces and manufacturer platform independent object and service functionality description. For the functional entity diagram model, it is proposed that the functional entities be designed as objects, the functional entities group as 'open' object oriented SDL platforms, and the interfaces between them as CORBA IDLs, communicating via the ORB in a generic implementation and location independent way. The functional entity groups are proposed to be modeled as SDL block types, while the functional entities and sub-entities as SDL process and service types. The objects interact with each other like client or server objects requesting or receiving services from other objects. Every object has a CORBA IDL interface, which allows every component to be distributed in an optimum way by providing a standardized infrastructure, ensuring interoperability, flexibility, reusability, transparency and management capabilities.
An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM In this paper, we propose the concept of compute memory, where computation is deeply embedded into the memory (SRAM). This deep embedding enables multi-row read access and analog signal processing. Compute memory exploits the relaxed precision and linearity requirements of pattern recognition applications. System-level simulations incorporating various deterministic errors from the analog signal chain demonstrate that the limited accuracy of analog processing does not significantly degrade the system performance, which means the probability of pattern detection is minimally impacted. The estimated energy saving is 63% as compared to the conventional system with standard embedded memory and parallel processing architecture, for a 256×256 target image.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.078
0.076667
0.066667
0.066667
0.066667
0.066667
0.003185
0
0
0
0
0
0
0
Tensor Decompositions and Applications This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
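As a small illustration of the CP model mentioned above (my own sketch, not code from the survey or the cited toolboxes), the snippet below forms a 3-way tensor as a sum of R rank-one tensors from factor matrices A, B, and C:

```python
# Sketch: X[i, j, k] ≈ sum_r A[i, r] * B[j, r] * C[k, r]  (CP / CANDECOMP-PARAFAC form)
import numpy as np

def cp_to_tensor(A, B, C):
    """Reconstruct an I x J x K tensor from CP factor matrices A (IxR), B (JxR), C (KxR)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A, B, C = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
X = cp_to_tensor(A, B, C)   # a tensor of rank at most R
print(X.shape)              # (4, 5, 6)
```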
Hidden factors and hidden topics: understanding rating dimensions with review text In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.
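For orientation, the latent-factor part of such a recommender predicts a rating as a global offset plus user and item biases plus an inner product of latent vectors; the sketch below is illustrative only and omits the paper's coupling between latent factors and review topics:

```python
# Sketch of a bias-plus-latent-factor rating predictor (topic coupling omitted).
import numpy as np

def predict_rating(alpha, beta_u, beta_i, gamma_u, gamma_i):
    """rec(u, i) = alpha + beta_u + beta_i + <gamma_u, gamma_i>"""
    return alpha + beta_u + beta_i + float(np.dot(gamma_u, gamma_i))

print(predict_rating(3.5, 0.2, -0.1, np.array([0.3, -0.4]), np.array([0.5, 0.1])))
```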
On the evolution of user interaction in Facebook Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.
Coarse grain reconfigurable architecture (embedded tutorial) The paper gives a brief survey over a decade of R&D on coarse grain reconfigurable hardware and related compilation techniques and points out its significance to the emerging discipline of reconfigurable computing.
Cambricon-F: machine learning computers with fractal von neumann architecture Machine learning techniques are pervasive tools for emerging commercial applications, and many dedicated machine learning computers on different scales have been deployed in embedded devices, servers, and data centers. Currently, most machine learning computer architectures still focus on optimizing performance and energy efficiency instead of programming productivity. However, with the fast development in silicon technology, programming productivity, including programming itself and software stack development, has become the vital factor, rather than performance and power efficiency, that hinders the application of machine learning computers. In this paper, we propose Cambricon-F, which is a series of homogeneous, sequential, multi-layer, layer-similar machine learning computers with the same ISA. A Cambricon-F machine has a fractal von Neumann architecture to iteratively manage its components: it has a von Neumann architecture, and its processing components (sub-nodes) are themselves Cambricon-F machines with von Neumann architecture and the same ISA. Since different Cambricon-F instances with different scales can share the same software stack on their common ISA, Cambricon-Fs can significantly improve programming productivity. Moreover, we address four major challenges in Cambricon-F architecture design, which allow Cambricon-F to achieve a high efficiency. We implement two Cambricon-F instances at different scales, i.e., Cambricon-F100 and Cambricon-F1. Compared to GPU based machines (DGX-1 and 1080Ti), Cambricon-F instances achieve 2.82x, 5.14x better performance, 8.37x, 11.39x better efficiency on average, with 74.5%, 93.8% smaller area costs, respectively.
Hardware acceleration of database operations As the amount of memory in database systems grows, entire database tables, or even databases, are able to fit in the system's memory, making in-memory database operations more prevalent. This shift from disk-based to in-memory database systems has contributed to a move from row-wise to columnar data storage. Furthermore, common database workloads have grown beyond online transaction processing (OLTP) to include online analytical processing and data mining. These workloads analyze huge datasets that are often irregular and not indexed, making traditional database operations like joins much more expensive. In this paper we explore using dedicated hardware to accelerate in-memory database operations. We present hardware to accelerate the selection process of compacting a single column into a linear column of selected data, joining two sorted columns via merging, and sorting a column. Finally, we put these primitives together to accelerate an entire join operation. We implement a prototype of this system using FPGAs and show substantial improvements in both absolute throughput and utilization of memory bandwidth. Using the prototype as a guide, we explore how the hardware resources required by our design change with the desired throughput.
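As a software reference for one of the accelerated primitives (the paper implements this as dedicated hardware; this Python version only pins down the operation), the sketch below joins two sorted key columns by merging:

```python
# Sketch: merge join over two key-sorted columns, emitting matched (key, left, right) tuples.
def merge_join(left, right):
    """left, right: lists of (key, payload) sorted by key."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i][0] < right[j][0]:
            i += 1
        elif left[i][0] > right[j][0]:
            j += 1
        else:
            k = left[i][0]
            # emit the cross product of the equal-key runs on both sides
            while i < len(left) and left[i][0] == k:
                j0 = j
                while j0 < len(right) and right[j0][0] == k:
                    out.append((k, left[i][1], right[j0][1]))
                    j0 += 1
                i += 1
            while j < len(right) and right[j][0] == k:
                j += 1
    return out

print(merge_join([(1, 'a'), (2, 'b'), (2, 'c')], [(2, 'x'), (3, 'y')]))
# [(2, 'b', 'x'), (2, 'c', 'x')]
```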
Capstan: A Vector RDA for Sparsity This paper proposes Capstan: a scalable, parallel-patterns-based, reconfigurable dataflow accelerator (RDA) for sparse and dense tensor applications. Instead of designing for one application, we start with common sparse data formats, each of which supports multiple applications. Using a declarative programming model, Capstan supports application-independent sparse iteration and memory primitives that can be mapped to vectorized, high-performance hardware. We optimize random-access sparse memories with configurable out-of-order execution to increase SRAM random-access throughput from 32% to 80%. For a variety of sparse applications, Capstan with DDR4 memory is 18× faster than a multi-core CPU baseline, while Capstan with HBM2 memory is 16× faster than an Nvidia V100 GPU. For sparse applications that can be mapped to Plasticine, a recent dense RDA, Capstan is 7.6× to 365× faster and only 16% larger.
Polymorphic Pipeline Array: A flexible multicore accelerator with virtualized execution for mobile multimedia applications Mobile computing in the form of smart phones, netbooks, and personal digital assistants has become an integral part of our everyday lives. Moving ahead to the next generation of mobile devices, we believe that multimedia will become a more critical and product-differentiating feature. High definition audio and video as well as 3D graphics provide richer interfaces and compelling capabilities. However, these algorithms also bring different computational challenges than wireless signal processing. Multimedia algorithms are more complex featuring more control flow and variable computational requirements where execution time is not dominated by innermost vector loops. Further, data access is more complex where media applications typically operate on multi-dimensional vectors of data rather than single-dimensional vectors with simple strides. Thus, the design of current mobile platforms requires reexamination to account for these new application domains. In this work, we focus on the design of a programmable, low-power accelerator for multimedia algorithms referred to as a polymorphic pipeline array, or PPA. The PPA is designed with flexibility and programmability as first-order requirements to enable the hardware to be dynamically customizable to the application. PPAs exploit pipeline parallelism found in streaming applications to create a coarse-grain hardware pipeline to execute streaming media applications. PPA resources are allocated to each stage depending on its size and ability to exploit fine-grain parallelism. Experimental results show that real-time media applications can take advantage of the static and dynamic configurability for increased power efficiency.
A Case for Intelligent RAM Two trends call into question the current practice of microprocessors and DRAMs being fabricated as different chips on different fab lines: 1) the gap between processor and DRAM speed is growing at 50% per year; and 2) the size and organization of memory on a single DRAM chip is becoming awkward to use in a system, yet size is growing at 60% per year. Intelligent RAM, or IRAM, merges processing and memory into a single chip to lower memory latency, increase memory bandwidth, and improve energy efficiency as well as to allow more flexible selection of memory size and organization. In addition, IRAM promises savings in power and board area. We review the state of microprocessors and DRAMs today, explore some of the opportunities and challenges for IRAMs, and finally estimate performance and energy efficiency of three IRAM designs.
Interval Type-2 Fuzzy Logic Systems Made Simple To date, because of the computational complexity of using a general type-2 fuzzy set (T2 FS) in a T2 fuzzy logic system (FLS), most people only use an interval T2 FS, the result being an interval T2 FLS (IT2 FLS). Unfortunately, there is a heavy educational burden even to using an IT2 FLS. This burden has to do with first having to learn general T2 FS mathematics, and then specializing it to an IT2 FS. In retrospect, we believe that requiring a person to use T2 FS mathematics represents a barrier to the use of an IT2 FLS. In this paper, we demonstrate that it is unnecessary to take the route from general T2 FS to IT2 FS, and that all of the results that are needed to implement an IT2 FLS can be obtained using T1 FS mathematics. As such, this paper is a novel tutorial that makes an IT2 FLS much more accessible to all readers of this journal. We can now develop an IT2 FLS in a much more straightforward way.
The emergence of a networking primitive in wireless sensor networks The wireless sensor network community approached networking abstractions as an open question, allowing answers to emerge with time and experience. The Trickle algorithm has become a basic mechanism used in numerous protocols and systems. Trickle brings nodes to eventual consistency quickly and efficiently while remaining remarkably robust to variations in network density, topology, and dynamics. Instead of flooding a network with packets, Trickle uses a "polite gossip" policy to control send rates so each node hears just enough packets to stay consistent. This simple mechanism enables Trickle to scale to 1000-fold changes in network density, reach consistency in seconds, and require only a few bytes of state yet impose a maintenance cost of a few sends an hour. Originally designed for disseminating new code, experience has shown Trickle to have much broader applicability, including route maintenance and neighbor discovery. This paper provides an overview of the research challenges wireless sensor networks face, describes the Trickle algorithm, and outlines several ways it is used today.
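A minimal sketch of the Trickle timer logic described above (parameter names I_min, I_max, and k follow the usual description of the algorithm; the random transmission point inside each interval is omitted for brevity):

```python
# Simplified Trickle timer: suppress redundant sends, reset on inconsistency.
class Trickle:
    def __init__(self, i_min=1.0, i_max=60.0, k=1):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.interval = i_min
        self.counter = 0

    def hear_consistent(self):
        self.counter += 1                   # a neighbor already said what we would say

    def hear_inconsistent(self):
        self.interval = self.i_min          # reset: spread the news quickly
        self.counter = 0

    def interval_expired(self):
        transmit = self.counter < self.k    # "polite gossip": stay quiet if k peers spoke
        self.interval = min(self.interval * 2, self.i_max)  # back off while consistent
        self.counter = 0
        return transmit

t = Trickle()
t.hear_consistent()
print(t.interval_expired())   # False: transmission suppressed, interval doubles toward I_max
```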
Charge redistribution loss consideration in optimal charge pump design The charge redistribution loss of capacitors is reviewed, and then employed in the optimal capacitor assignment of charge pumps. The average output voltage is unambiguously defined, and efficiency due to redistribution loss is discussed. Analyses are confirmed by Hspice simulations on charge pumps designed using a 0.35 μm CMOS process.
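For reference, the standard redistribution result underlying this kind of analysis (a textbook relation, not a formula quoted from the paper): when a capacitor C1 charged to V1 is connected to C2 charged to V2, charge conservation fixes the final voltage, and the dissipated energy is independent of the series resistance:

$$V_f = \frac{C_1 V_1 + C_2 V_2}{C_1 + C_2}, \qquad E_{\mathrm{loss}} = \frac{1}{2}\,\frac{C_1 C_2}{C_1 + C_2}\,(V_1 - V_2)^2.$$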
Design of Fuzzy-Neural-Network-Inherited Backstepping Control for Robot Manipulator Including Actuator Dynamics This study presents the design and analysis of an intelligent control system that inherits the systematic and recursive design methodology for an n-link robot manipulator, including actuator dynamics, in order to achieve a high-precision position tracking with a firm stability and robustness. First, the coupled higher order dynamic model of an n-link robot manipulator is introduced briefly. Then, a conventional backstepping control (BSC) scheme is developed for the joint position tracking of the robot manipulator. Moreover, a fuzzy-neural-network-inherited BSC (FNNIBSC) scheme is proposed to relax the requirement of detailed system information to improve the robustness of BSC and to deal with the serious chattering that is caused by the discontinuous function. In the FNNIBSC strategy, the FNN framework is designed to mimic the BSC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and Lyapunov stability theorem to ensure the network convergence as well as stable control performance. Numerical simulations and experimental results of a two-link robot manipulator that are actuated by dc servomotors are provided to justify the claims of the proposed FNNIBSC system, and the superiority of the proposed FNNIBSC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.
A 12.6 mW, 573-2901 kS/s Reconfigurable Processor for Reconstruction of Compressively Sensed Physiological Signals. This article presents a reconfigurable processor based on the alternating direction method of multipliers (ADMM) algorithm for reconstructing compressively sensed physiological signals. The architecture is flexible to support physiological ExG [electrocardiography (ECG), electromyography (EMG), and electroencephalography (EEG)] signal with various signal dimensions (128, 256, 384, and 512). Data c...
1.05625
0.0575
0.05
0.05
0.05
0.02625
0.016667
0.002143
0.000019
0
0
0
0
0
Digital Interpolating Phase Modulator Implementation for Outphasing PA Recent wireless architectures have increased requirements in terms of power consumption and linearity. These requirements especially concern the power amplifier (PA) responsible for driving the antenna. One architecture proposed for this role is the outphasing amplifier, which realizes linear amplification using non-linear components. To generate the waveforms required by the switched-mode outphasing architecture, the Digital Interpolating Phase Modulator (DIPM) was proposed recently. It works by reconstructing a constant-amplitude RF signal using several DSP engines. However, the original authors describe only an implementation that assumes four signal processing engines. In this paper, an implementation of the DIPM algorithm is developed, alongside an exploration of the number of signal processing engines used. Furthermore, the digital design is synthesized in two different process technologies to contrast power and area utilization. Our simulations confirm their choice of the number of engines if power is a constraint. However, other requirements, such as the technology used, may lead to different solutions.
Emerging multi-level architectures and unbalanced mismatch calibration technique for high-efficient and high-linear LINC systems Linear amplification with nonlinear components (LINC) provides excellent linearity with high efficiency by adopting switch-mode power amplifiers. One of the issues regarding LINC is the efficiency degradation due to wasted power at the power combiner when the phase difference is large. In this paper, two efficiency enhancement architectures are presented: the multi-level LINC (MLINC) architecture improves transmitter efficiency compared to a conventional LINC system, and the proposed uneven multi-level LINC (UMLINC) can further improve the efficiency. Another problem of LINC is the amplitude and phase mismatches between the two signal paths, which degrade the linearity performance. To address this problem, an unbalanced mismatch calibration technique for the proposed LINC system is introduced.
A 1.5–1.9-GHz All-Digital Tri-Phasing Transmitter With an Integrated Multilevel Class-D Power Amplifier Achieving 100-MHz RF Bandwidth We present a prototype RF transmitter with an integrated multilevel class-D power amplifier (PA), implemented in 28-nm CMOS. The transmitter utilizes tri-phasing modulation, which combines three constant-envelope phase-modulated signals with coarse amplitude modulation in the PA. This new architecture achieves the back-off efficiency of multilevel outphasing, without linearity-degrading discontinuities in the RF output waveform. Because all signal processing is performed in the time domain up to the PA, the entire system is implemented with digital circuits and structures, thus also enabling the use of synthesis and place-and-route CAD tools for the RF front end. The effectiveness of the digital tri-phasing concept is supported by extensive measurement results. Improved wideband performance is validated through the transmission of orthogonal frequency-division multiplexing (OFDM) bandwidths up to 100 MHz. Enhanced reconfigurability is demonstrated with non-contiguous carrier aggregation and digital carrier generation between 1.5 and 1.9 GHz without a frequency synthesizer. For a 20-MHz 256-QAM OFDM signal at 3.5% error vector magnitude (EVM), the transmitter achieves 22.6-dBm output power and 14.6% PA efficiency. Thanks to the high linearity enabled by tri-phasing, no digital predistortion is needed for the PA.
CORDIC-based computation of ArcCos and ArcSin CORDIC-based algorithms to compute cos^-1(t), sin^-1(t), and sqrt(1-t^2) are proposed. The implementation requires a standard CORDIC module plus a module to compute the direction of rotation, this being the same hardware required for the extended CORDIC vectoring, recently proposed by the authors. Although these functions can be obtained as a special case of this extended vectoring, the specific algorithm we propose here presents two significant improvements: (1) it achieves an angle granularity of 2^-n using the same datapath width as the standard CORDIC algorithm (about n bits, instead of about 2n which would be required using the extended vectoring), and (2) no repetitions of iterations are needed. The proposed algorithm is compatible with the extended vectoring and, in contrast with previous implementations, the number of iterations and the delay of each iteration are the same as for the conventional CORDIC algorithm.
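For context (these identities are standard trigonometry, not the paper's algorithm, which computes the functions directly in a modified vectoring mode), the quantities involved are related by:

$$\sin^{-1}(t) = \tan^{-1}\!\left(\frac{t}{\sqrt{1-t^2}}\right), \qquad \cos^{-1}(t) = \frac{\pi}{2} - \sin^{-1}(t), \qquad |t| < 1.$$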
A Wideband ΔΣ Digital-RF Modulator for High Data Rate Transmitters A wideband software-defined digital-RF modulator targeting Gb/s data rates is presented. The modulator consists of a 2.625-GS/s digital ΔΣ modulator, a 5.25-GHz direct digital-RF converter, and a fourth-order auto-tuned passive LC RF bandpass filter. The architecture removes high dynamic range analog circuits from the baseband signal path, replacing them with high-speed digital circuits to take advantage of digital CMOS scaling. The integration of the digital-RF converter with an RF bandpass reconstruction filter eliminates spurious signals and noise associated with direct digital-RF conversion. An efficient passgate adder circuit lowers the power consumption of the high-speed digital processing and a quadrature digital-IF approach is employed to reduce LO feedthrough and image spurs. The digital-RF modulator is software programmable to support variable bandwidths, adaptive modulation schemes, and multi-channel operation within a frequency band. A prototype IC built in 0.13-μm CMOS demonstrates a data rate of 1.2 Gb/s using OFDM modulation in a bandwidth of 200 MHz centered at 5.25 GHz. In-band LO and image spurs are less than -59 dBc without requiring calibration. The modulator consumes 187 mW and occupies a die area of 0.72 mm².
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
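A compact sketch of the heavy-edge idea in the coarsening phase (my own illustrative Python, not the authors' implementation): visit vertices in random order and match each unmatched vertex to the unmatched neighbor connected by the heaviest edge.

```python
# Greedy heavy-edge matching over a weighted adjacency structure (illustrative).
import random

def heavy_edge_matching(adj):
    """adj: dict mapping vertex -> dict of neighbor -> edge weight."""
    matched = {}
    order = list(adj)
    random.shuffle(order)
    for v in order:
        if v in matched:
            continue
        candidates = [(w, u) for u, w in adj[v].items() if u not in matched and u != v]
        if candidates:
            _, u = max(candidates)          # heaviest incident edge to an unmatched neighbor
            matched[v], matched[u] = u, v   # v and u are contracted in the coarser graph
        else:
            matched[v] = v                  # stays as a singleton
    return matched

g = {0: {1: 5, 2: 1}, 1: {0: 5, 2: 2}, 2: {0: 1, 1: 2, 3: 4}, 3: {2: 4}}
print(heavy_edge_matching(g))
```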
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
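The final recognition step reduces to a nearest-subspace test; the sketch below (illustrative Python with synthetic data, not the authors' code) assigns a test image to the identity whose low-dimensional subspace gives the smallest Euclidean reconstruction residual:

```python
# Nearest-subspace classification: smallest residual to each identity's subspace wins.
import numpy as np

def residual(x, B):
    """Distance from x to span(B); B has orthonormal columns (e.g. from QR or SVD)."""
    return np.linalg.norm(x - B @ (B.T @ x))

def classify(x, subspaces):
    return min(subspaces, key=lambda label: residual(x, subspaces[label]))

rng = np.random.default_rng(1)
B1, _ = np.linalg.qr(rng.normal(size=(100, 5)))   # basis for identity "A"
B2, _ = np.linalg.qr(rng.normal(size=(100, 5)))   # basis for identity "B"
x = B1 @ rng.normal(size=5) + 0.01 * rng.normal(size=100)  # noisy image near subspace "A"
print(classify(x, {"A": B1, "B": B2}))            # "A"
```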
Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
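A toy sketch of the general mechanism (the delay values, the thresholding, and the absence of a spreading or error-correcting code are my simplifications, not CoCo's actual scheme): covert bits perturb inter-packet delays around a base gap, and the receiver recovers them by thresholding the observed gaps.

```python
# Toy covert timing channel: bits modulate inter-packet delays around a base gap.
def encode_delays(bits, base=0.10, amp=0.02):
    return [base + (amp if b else -amp) for b in bits]

def decode_delays(delays, base=0.10):
    return [1 if d > base else 0 for d in delays]

bits = [1, 0, 1, 1, 0]
# Small network perturbation added to each delay before decoding.
noisy = [d + 0.003 * ((-1) ** i) for i, d in enumerate(encode_delays(bits))]
print(decode_delays(noisy) == bits)   # True for this small perturbation
```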
Analysis of Direct-Conversion IQ Transmitters With 25% Duty-Cycle Passive Mixers. The performance of direct-conversion IQ transmitters with 25% duty-cycle passive mixers is analyzed. The up-conversion transfer function is calculated and it is shown that due to lack of reverse isolation of the passive mixer, the high- and low-side conversion gains can be different. The contribution of thermal noise from mixer switches to the total output noise of the transmitter is formulated. I...
Understanding contention-based channels and using them for defense Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
Multi-Channel Neural Recording Implants: A Review. The recently growing progress in neuroscience research and relevant achievements, as well as advancements in the fabrication process, have increased the demand for neural interfacing systems. Brain-machine interfaces (BMIs) have been revealed to be a promising method for the diagnosis and treatment of neurological disorders and the restoration of sensory and motor function. Neural recording implants, as a part of BMI, are capable of capturing brain signals, and amplifying, digitizing, and transferring them outside of the body with a transmitter. The main challenges of designing such implants are minimizing power consumption and the silicon area. In this paper, multi-channel neural recording implants are surveyed. After presenting various neural-signal features, we investigate main available neural recording circuit and system architectures. The fundamental blocks of available architectures, such as neural amplifiers, analog to digital converters (ADCs) and compression blocks, are explored. We cover the various topologies of neural amplifiers, provide a comparison, and probe their design challenges. To achieve a relatively high SNR at the output of the neural amplifier, noise reduction techniques are discussed. Also, to transfer neural signals outside of the body, they are digitized using data converters, then in most cases, the data compression is applied to mitigate power consumption. We present the various dedicated ADC structures, as well as an overview of main data compression methods.
1.2
0.2
0.2
0.1
0.016667
0
0
0
0
0
0
0
0
0
A UWB Impulse-Radio Timed-Array Radar With Time-Shifted Direct-Sampling Architecture in 0.18-μm CMOS This paper presents an ultra-wideband (UWB) impulse radio timed-array radar utilizing time-shifted direct-sampling architecture. Time shift between the sampling time of the transmitter and the receiver determines the time of arrival (TOA), and a four-element timed antenna array enables beamforming. The different time shifts among the channels at the receiver determine the object's direction of arrival (DOA). Transmitter channels have different shifts, as well, to enhance spatial selectivity. The direct-sampling receiver reconstructs the scattered waveform in the digital domain, which provides full freedom to the backend digital signal processing. The on-chip digital-to-time converter (DTC) provides all the necessary timing with a fine resolution and wide range. The proposed architecture has a range and azimuth resolution of 0.75 cm and 3 degrees, respectively. The transmitter is capable of synthesizing a variety of pulses within 800 ps at a sampling rate of 10 GS/s. The receiver has an equivalent sampling frequency of 20 GS/s while supporting the RF bandwidth from 2 to 4 GHz. The proposed designs were fabricated in a 0.18-μm standard CMOS technology with die sizes of 5.4×3.3 mm² and 5.4×5.8 mm² for the transmitter and the receiver, respectively.
On the spectral and power requirements for ultra-wideband transmission UWB systems based on impulse radio have the potential to provide very high data rates over short distances. In this paper, a new pulse shape is presented that satisfies the FCC spectral mask. Using this pulse, the link budget is calculated to quantify the relationship between data rate and distance. It is shown that UWB can be a good candidate for reliably transmitting 100 Mbps over distances at about 10 meters.
Breath activity monitoring with wearable UWB radars: measurement and analysis of the pulses reflected by the human body. Objective: Measurements of ultrawideband (UWB) pulses reflected by the human body are conducted to evidence the differences in the received signal time behaviors due to respiration phases, and to experimentally verify previously obtained numerical results on the body's organs responsible for pulse reflection. Methods: Two experimental setups are used. The first one is based on a commercially avail...
A 4×4 IR UWB Timed-Array Radar Based on 16-Channel Transmitter and Sampling Capacitor Reused Receiver A 4 × 4 impulse radio (IR) ultra-wide band (UWB) timed-array radar is proposed in this brief based on 16-channel all-digital transmitter and sampling capacitor reused receiver. 3-D beamforming is achieved by the 16-channel (4 × 4) planar low power IR UWB beamforming transmitter. UWB receiver adopts energy detection and reuses the integrating capacitor as C-DAC within an 8-bit SAR ADC to save area ...
A Continuous Sweep-Clock-Based Time-Expansion Impulse-Radio Radar. This paper presents a single-chip impulse-radio (IR) radar transceiver that utilizes a novel continuous sweep-clock generator. While requiring low power and small area, the proposed clock generator enables a versatile IR radar operation with millimeter resolution. The radar detection range and update rate are adjustable by an on-chip delay command circuit or by an external master. The IR radar tra...
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph (Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445-452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993). From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan-Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.
Controllability and observability of Boolean control networks The controllability and observability of Boolean control networks are investigated. After a brief review on converting a logic dynamics to a discrete-time linear dynamics with a transition matrix, some formulas are obtained for retrieving network and its logical dynamic equations from this network transition matrix. Based on the discrete-time dynamics, the controllability via two kinds of inputs is revealed by providing the corresponding reachable sets precisely. Then the problem of observability is also solved by giving necessary and sufficient conditions.
From few to many: illumination cone models for face recognition under variable lighting and pose We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
A world survey of artificial brain projects, Part I: Large-scale brain simulations Driven by rapid ongoing advances in computer hardware, neuroscience and computer science, Artificial Brain research and development are blossoming. This article constitutes the first half of a two-part world survey of artificial brain projects: this part dealing with large-scale brain simulations, and the second part with biologically inspired cognitive architectures (BICAs). The large-scale brain simulations we consider in depth here include those by Markram, Modha, Boahen, Horwitz, Edelman, Izhikevich, and Just. As well as reviewing the particulars of these simulation projects, we position them in a broader perspective, comparing the different underlying definitions of the concept of "simulation," and noting that in many ways the projects are modeling neurosystems at different levels as well as using different methodologies.
MicroGP—An Evolutionary Assembly Program Generator This paper describes μGP, an evolutionary approach for generating assembly programs tuned for a specific microprocessor. The approach is based on three clearly separated blocks: an evolutionary core, an instruction library and an external evaluator. The evolutionary core conducts adaptive population-based search. The instruction library is used to map individuals to valid assembly language programs. The external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core. μGP has some distinctive features that allow its use in specific contexts. This paper focuses on one such context: test program generation for design validation of microprocessors. Reported results show μGP being used to validate a complex 5-stage pipelined microprocessor. Its induced test programs outperform an exhaustive functional test and an instruction randomizer, showing that engineers are able to automatically obtain high-quality test programs.
Practical Timing Side Channel Attacks against Kernel Space ASLR Due to the prevalence of control-flow hijacking attacks, a wide variety of defense methods to protect both user space and kernel space code have been developed in the past years. A few examples that have received widespread adoption include stack canaries, non-executable memory, and Address Space Layout Randomization (ASLR). When implemented correctly (i.e., a given system fully supports these protection methods and no information leak exists), the attack surface is significantly reduced and typical exploitation strategies are severely thwarted. All modern desktop and server operating systems support these techniques and ASLR has also been added to different mobile operating systems recently. In this paper, we study the limitations of kernel space ASLR against a local attacker with restricted privileges. We show that an adversary can implement a generic side channel attack against the memory management system to deduce information about the privileged address space layout. Our approach is based on the intrinsic property that the different caches are shared resources on computer systems. We introduce three implementations of our methodology and show that our attacks are feasible on four different x86-based CPUs (both 32- and 64-bit architectures) and also applicable to virtual machines. As a result, we can successfully circumvent kernel space ASLR on current operating systems. Furthermore, we also discuss mitigation strategies against our attacks, and propose and implement a defense solution with negligible performance overhead.
ΣΔ ADC with fractional sample rate conversion for software defined radio receiver.
A Hybrid 1st/2nd-Order VCO-Based CTDSM With Rail-to-Rail Artifact Tolerance for Bidirectional Neural Interface Bi-directional brain machine interfaces enable simultaneous brain activity monitoring and neural modulation. However, stimulation artifacts can saturate the instrumentation front-end while concurrent on-site recording is needed. This brief presents a voltage controlled-oscillator (VCO) based continuous-time ΔΣ modulator (CTDSM) with rail-to-rail input range and fast artifact tracking. A hybrid 1st/2nd-order loop is designed to achieve high dynamic range (DR) and large input range. Stimulation artifact is detected by a phase counter and compensated by the 1st-order loop. The residue signal is digitized by the 2nd-order loop for high precision. Redundancy between the two loops is implemented as feedback capacitor elements with non-binary ratio to guarantee feedback stability and linearity. Fabricated in a 55-nm CMOS process, the prototype achieves 65.7 dB SNDR across 10 kHz bandwidth with a full scale of 200 mVpp. A ±1.2 V input range is achieved to suppress artifacts. A saline-based experiment with simultaneous stimulation and recording demonstrates that the implemented system can track and tolerate rail-to-rail stimulation artifact within 30 μs while small neural signals can be continuously monitored.
1.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
0
A 0.0285-mm² 0.68-pJ/bit Single-Loop Full-Rate Bang-Bang CDR Without Reference and Separate FD Pulling Off an 8.2-Gb/s/μs Acquisition Speed of the PAM-4 Input in 28-nm CMOS This article reports a single-loop full-rate bang-bang clock and data recovery (BBCDR) circuit supporting a four-level pulse amplitude modulation (PAM-4) pattern. We eliminate both the reference and the separate frequency detector (FD) by deliberately adding two fixed strobe points in the bang-bang phase detector (BBPD) curve via a clock-selection scheme. As such, we can achieve a wide frequency-capture range in a single-sided FD polarity. The BBPD also incorporates a hybrid control circuit to automate the frequency acquisition over a wide frequency range. Prototyped in a 28-nm CMOS, the proposed BBCDR occupies a tiny area of 0.0285 mm² and exhibits a 23-to-29-Gb/s capture range. The acquisition speed [8.2 Gb/s/μs] and energy efficiency (0.68 pJ/bit) compare favorably with the state of the art.
A 3.3-mW 25.2-to-29.4-GHz Current-Reuse VCO Using a Single-Turn Multi-Tap Inductor and Differential-Only Switched-Capacitor Arrays With a 187.6-dBc/Hz FOM A millimeter-wave current-reuse voltage-controlled oscillator (VCO) features a single-turn multi-tap inductor and two separate differential-only switched-capacitor arrays to improve the power efficiency and phase noise (PN). Specifically, a single-branch complementary VCO topology, in conjunction with a multi-resonant Resistor-Inductor-Capacitor-Mutual inductance (RLCM) tank, allows sharing the bias current and reshaping the impulse-sensitivity-function. The latter is based on an area-efficient RLCM tank to concurrently generate two high quality-factor differential-mode resonances at the fundamental and 2nd-harmonic oscillation frequencies. Fabricated in 65-nm CMOS technology, our VCO at 27.7 GHz shows a PN of -109.91 dBc/Hz at 1-MHz offset (after on-chip divider-by-2), while consuming just 3.3 mW at a 1.1-V supply. It corresponds to a Figure-of-Merit (FOM) of 187.6 dBc/Hz. The frequency tuning range is 15.3% (25.2 to 29.4 GHz) and the core area is 0.116 mm².
Bird'S-Eye View Of Analog And Mixed-Signal Chips For The 21st Century The Internet of Everything (IoE), clearly a 21st century's technology, brilliantly plays with digital data obtained from analog sources, bringing together two different realities, the analog (physical/real), and the digital (cyber/virtual) worlds. Then, with the boundaries of IoE still analog in nature, the required functions at the interface involve sensing, measuring, filtering, converting, processing, and connecting, which imply that the analog layer governs the entire system in terms of accuracy and precision. Furthermore, such interface integrates several analog and mixed-signal subsystems that comprise mainly signal transmission and reception, frequency generation, energy harvesting, data, and power conversion. This paper sets forth a state-of-the-art design perspective of some of the most critical building blocks used in the analog/digital interface, covering wireless cellular transceivers, millimeter-wave frequency generators, energy harvesting interfaces, plus, data and power converters, that exhibit high quality performance achieved through low-power consumption, high energy-efficiency, and high speed.
A 3x9 Gb/s Shared, All-Digital CDR for High-Speed, High-Density I/O. This paper presents a novel all-digital CDR scheme in 90 nm CMOS. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data (“data clock”) and the other is swept across the delay line (“search clock”). As the search clock is swept, its samples are compared against the data samples to generate...
A 400 Mb/s∼2.5 Gb/s Referenceless CDR IC Using Intrinsic Frequency Detection Capability of Half-Rate Linear Phase Detector. A 400 Mb/s ~2.5 Gb/s referenceless clock and data recovery (CDR) IC is presented. This paper shows that the half-rate linear phase detector (PD) has not only phase detection capability but also single-sided frequency detection capability in itself. By using this intrinsic frequency detection capability of the half-rate linear PD, a CDR can be implemented in the single loop architecture without bot...
Jitter-Power Trade-Offs in PLLs As new applications impose jitter values in the range of a few tens of femtoseconds, the design of phase-locked loops faces daunting challenges. This paper derives basic relations between the tolerable jitter and the power consumption, predicting severe issues as jitters below 10 fs are sought. The results are also applied to the sampling clocks in analog-to-digital converters and suggest that clock generation may consume a greater power than the converter itself.
A 64 Gb/s Low-Power Transceiver for Short-Reach PAM-4 Electrical Links in 28-nm FDSOI CMOS A four-level pulse-amplitude modulation (PAM-4) transceiver operating up to 64 Gb/s in 28-nm CMOS fully depleted silicon-on-insulator (FDSOI) for short-reach electrical links is presented. The receiver equalization relies on a flexible continuous-time linear equalizer (CTLE), providing a very accurate channel inversion through a transfer function that can be optimally adapted at low frequency, mid-frequency, and high frequency independently. The CTLE meets the performance requirements of CEI-56G-VSR without requiring the decision feedback equalizer (DFE) implementation. As a result, timing constraints for comparators in data and edge sampling paths may be relaxed by using track-and-hold (T&H) stages, saving power consumption. At the maximum speed, the receiver draws 180 mA from 1-V supply, corresponding to 2.8 mW/Gb/s only. The transmitter embeds a flexible feed-forward equalizer (FFE) which can be reconfigured to comply with legacy standards. A comparison between current-mode (CM) and voltage-mode (VM) TX drivers is proposed, proving through experiments that the latter yields larger PAM-4 eye openings, thanks to the intrinsically higher speed. The full transceiver (TX, RX, and clock generation) operates from 16 to 64 Gb/s in PAM-4 and 8 to 32 Gb/s in non-return-to-zero (NRZ), and supports 2× and 4× oversampling to reduce data rate down to 2 Gb/s. A TX-to-RX link at 64 Gb/s, across a 16.8-dB-loss channel, reaches 10^-12 minimum bit-error rate (BER) and 0.19-UI horizontal eye opening at BER = 10^-6, with 5.02 mW/Gb/s power dissipation.
Class-C Harmonic CMOS VCOs, With a General Result on Phase Noise A harmonic oscillator topology displaying an improved phase noise performance is introduced in this paper. Exploiting the advantages yielded by operating the core transistors in class-C, a theoretical 3.9 dB phase noise improvement compared to the standard differential-pair LC-tank oscillator is achieved for the same current consumption. Further benefits derive from the natural rejection of the tail bias current noise, and from the absence of parasitic nodes sensitive to stray capacitances. Closed-form phase-noise equations obtained from a rigorous time-variant circuit analysis are presented, as well as a time-variant study of the stability of the oscillation amplitude, resulting in simple guidelines for a reliable design. Furthermore, the analysis of phase noise is extended to encompass a general harmonic oscillator, showing that all phase noise relations previously obtained for specific LC oscillator topologies are special cases of a very general and remarkably simple result.
Broadband MIMO-OFDM Wireless Communications Orthogonal frequency division multiplexing (OFDM) is a popular method for high data rate wireless transmission. OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or to enhance the system capacity on time-varying and frequency-selective channels, resulting in a multiple-input multiple-output (MIMO) configuration. The paper explores various p...
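A minimal single-antenna OFDM sketch may help make the modulation concrete; it covers only the OFDM part of the abstract, not the MIMO extension, and the subcarrier count, cyclic-prefix length, and QPSK mapping are illustrative assumptions:

```python
import numpy as np

def ofdm_modulate(symbols, cp_len):
    """One OFDM symbol: map frequency-domain subcarrier symbols to a
    time-domain waveform with an IFFT, then prepend a cyclic prefix so that
    multipath becomes a simple per-subcarrier complex gain."""
    time = np.fft.ifft(symbols)
    return np.concatenate([time[-cp_len:], time])

def ofdm_demodulate(rx, cp_len, n_sub):
    """Strip the cyclic prefix and return to the frequency domain."""
    return np.fft.fft(rx[cp_len:cp_len + n_sub])

# Usage: 64 QPSK subcarriers survive a round trip through an ideal channel.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
tx = ofdm_modulate(qpsk, cp_len=16)
print(np.allclose(ofdm_demodulate(tx, 16, 64), qpsk))   # True
```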
Design of Symmetrical Class E Power Amplifiers for Very Low Harmonic-Content Applications Class E power amplifier circuits are very suitable for high efficiency power amplification applications in the radio-frequency and microwave ranges. However, due to the inherent asymmetrical driving arrangement, they suffer significant harmonic contents in the output voltage and current, and usually require substantial design efforts in achieving the desired load matching networks for applications requiring very low harmonic contents. In this paper, the design of a Class E power amplifier with resonant tank being symmetrically driven by two Class E circuits is studied. The symmetrical Class E circuit, under nominal operating conditions, has extremely low harmonic distortions, and the design of the impedance matching network for harmonic filtering becomes less critical. Practical steady-state design equations for Class E operation are derived and graphically presented. Experimental circuits are constructed for distortion evaluation. It has been found that this circuit offers total harmonic distortions which are about an order of magnitude lower than those of the conventional Class E power amplifier.
Reconstruction of nonuniformly sampled bandlimited signals by means of digital fractional delay filters This paper considers the problem of reconstructing a class of nonuniformly sampled bandlimited signals of which a special case occurs in, e.g., time-interleaved analog-to-digital converter (ADC) systems due to time-skew errors. To this end, we propose a synthesis system composed of digital fractional delay filters. The overall system (i.e., nonuniform sampling and the proposed synthesis system) can be viewed as a generalization of time-interleaved ADC systems to which the former reduces as a special case. Compared with existing reconstruction techniques, our method has major advantages from an implementation point of view. To be precise, (1) we can perform the reconstruction as well as desired (in a certain sense) by properly designing the digital fractional delay filters, and (2) if properly implemented, the fractional delay filters need not be redesigned in case the time skews are changed. The price to pay for these attractive features is that we need to use a slight oversampling. It should be stressed, however, that the oversampling factor is less than two as compared with the Nyquist rate. The paper includes error and quantization noise analysis. The former is useful in the analysis of the quantization noise and when designing practical fractional delay filters approximating the ideal filters.
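As a rough illustration of the building block the abstract relies on, the sketch below is a generic windowed-sinc fractional-delay FIR filter; it is a textbook approximation of the ideal fractional-delay filter, not the paper's synthesis system, and the tap count and Hamming window are assumptions:

```python
import numpy as np

def frac_delay_fir(delay, num_taps=21):
    """Windowed-sinc approximation of an ideal fractional-delay filter.
    `delay` is in samples and may be non-integer; the window limits the
    truncation error of the infinitely long ideal impulse response."""
    n = np.arange(num_taps)
    center = (num_taps - 1) / 2
    h = np.sinc(n - center - delay)   # shifted ideal band-limited response
    h *= np.hamming(num_taps)         # taper to reduce ripple
    return h / np.sum(h)              # normalize DC gain to 1

# Usage: delay a band-limited test tone by 0.37 samples.
fs, f0 = 1000.0, 50.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * f0 * t)
y = np.convolve(x, frac_delay_fir(0.37), mode="same")
```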
Communication-efficient failure detection and consensus in omission environments Failure detectors have been shown to be a very useful mechanism to solve the consensus problem in the crash failure model, for which a number of communication-efficient algorithms have been proposed. In this paper we deal with the definition, implementation and use of communication-efficient failure detectors in the general omission failure model, where processes can fail by crashing and by omitting messages when sending and/or receiving. We first define a new failure detector class for this model in terms of completeness and accuracy properties. Then we propose an algorithm that implements a failure detector of the proposed class in a communication-efficient way, in the sense that only a linear number of links are used to send messages forever. We also explain how the well-known consensus algorithm of Chandra and Toueg can be adapted in order to use the proposed failure detector.
The rise of "big data" on cloud computing: Review and open research issues. Cloud computing is a powerful technology to perform massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data or big data generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices. On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various ne...
1.071111
0.073333
0.073333
0.066667
0.066667
0.046667
0.022222
0.003016
0
0
0
0
0
0
Leader Election in Sparse Dynamic Networks with Churn We investigate the problem of electing a leader in a sparse but well-connected synchronous dynamic network in which up to a fraction of the nodes chosen adversarially can leave/join the network per time step. At this churn rate, all nodes in the network can be replaced by new nodes in a constant number of rounds. Moreover, the adversary can shield a fraction of the nodes (which may include the leader) by repeatedly churning their neighbourhood and thus hinder their communication with the rest of the network. However, empirical studies in peer-to-peer networks have shown that a significant fraction of the nodes are usually stable and well connected. It is, therefore, natural to take advantage of such stability and well-connectedness to establish a leader that can maintain good communication with the rest of the nodes. Since the dynamics could change eventually, it is also essential to re-elect a new leader whenever the current leader has either left the network or is not well-connected with the rest of the nodes. In such re-elections, care must be taken to avoid premature and spurious leader election resulting in more than one leader being present in the network at the same time. We assume a broadcast-based communication model in which each node can send up to O(log³ n) bits per round and is unaware of its receivers a priori. We present a randomized algorithm that can, in O(log n) rounds, detect and reach consensus about the health of the leader (i.e., whether it is able to maintain good communication with the rest of the network). In the event that the network decides that the leader's ability to communicate is unhealthy, a new leader is re-elected in a further O(log² n) rounds. Our running times hold with high probability, and, furthermore, we are guaranteed with high probability that there is at most one leader at any time.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
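For readers unfamiliar with dominance frontiers, the sketch below computes them using the later Cooper–Harvey–Kennedy formulation rather than the paper's own construction; the graph encoding (predecessor lists plus an immediate-dominator map) is an assumption made for brevity:

```python
def dominance_frontiers(preds, idom):
    """Compute dominance frontiers from CFG predecessor lists and immediate
    dominators. `preds` maps node -> list of predecessors, `idom` maps
    node -> its immediate dominator (the entry node maps to itself)."""
    df = {n: set() for n in preds}
    for b, ps in preds.items():
        if len(ps) >= 2:                 # only join points contribute
            for p in ps:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)    # b is in the frontier of runner
                    runner = idom[runner]
    return df

# Tiny diamond CFG: entry -> a, b; a -> merge; b -> merge.
preds = {"entry": [], "a": ["entry"], "b": ["entry"], "merge": ["a", "b"]}
idom  = {"entry": "entry", "a": "entry", "b": "entry", "merge": "entry"}
print(dominance_frontiers(preds, idom))   # 'a' and 'b' both have {'merge'}
```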
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
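A toy sketch of the key-to-node mapping Chord provides: keys and nodes are hashed onto the same identifier ring and a key is stored at its successor node. It uses a centrally sorted ring instead of Chord's distributed finger-table lookup, and the hash width and node names are illustrative:

```python
import bisect
import hashlib

def ring_id(name, m=16):
    """Hash a key or node name onto a 2**m identifier ring (consistent
    hashing in the spirit of Chord; SHA-1 truncation is an illustrative choice)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, key_id):
    """Return the first node clockwise from key_id on the ring; that node
    is responsible for storing the key/data item pair."""
    ring = sorted(node_ids)
    i = bisect.bisect_left(ring, key_id)
    return ring[i % len(ring)]            # wrap around past the largest id

nodes = {name: ring_id(name) for name in ["n1", "n2", "n3", "n4"]}
key = "some-data-item"
print(f"key {key!r} maps to node id {successor(nodes.values(), ring_id(key))}")
```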
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
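As a concrete instance of the splitting described above, here is a minimal ADMM sketch for the lasso; the penalty parameter, iteration count, and synthetic data are illustrative choices, not values from the review:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1,
    alternating a quadratic x-update, a soft-thresholding z-update,
    and a dual (u) update."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # factorable once, reused each iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

# Usage on a small synthetic sparse-regression problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b, lam=0.5), 2))
```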
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III response is synthesized by combining a high-gain low-frequency path (via an error amplifier) with a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by >75% in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Regulated Charge Pump With New Clocking Scheme for Smoothing the Charging Current in Low Voltage CMOS Process. A regulated cross-coupled charge pump with a new charging-current smoothing technique is proposed and verified in a 0.18-μm 1.8-V/3.3-V CMOS process. The transient behaviors of the 3-stage cross-coupled charge pump and the expressions for the charging current are described in detail. The experiment results show that the charging current ripples are reduced by a factor of three through using the proposed n...
A 1-V-Input Switched-Capacitor Voltage Converter With Voltage-Reference-Free Pulse-Density Modulation. A 1-V-input 0.45-V-output switched-capacitor (SC) voltage converter with voltage-reference-free pulse-density modulation (VRF-PDM) is proposed. The all-digital VRF-PDM scheme improves the efficiency from 17% to 73% at 50- μA output current by reducing the pulse density and eliminating the voltage reference circuit. An output voltage trimming by the hot-carrier injection to a comparator and a perio...
Conductance Modulation Techniques in Switched-Capacitor DC-DC Converter for Maximum-Efficiency Tracking and Ripple Mitigation in 22 nm Tri-Gate CMOS Switch conductance modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, in 22 nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique that modulates the gate drive of select MOSFET switches effectively in all conversion modes. Efficiency improvements up to 15% are measured under low output voltage and load conditions. Load-independent output ripple below 50 mV is achieved, enabling reduced interleaving. Test chip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
An Energy-Efficient Graphics Processor in 14-nm Tri-Gate CMOS Featuring Integrated Voltage Regulators for Fine-Grain DVFS, Retentive Sleep, and V_MIN Optimization Graphics workloads make highly dynamic use of resources such as execution units (EUs), and thus can benefit from fast, fine-grain dynamic voltage and frequency scaling (DVFS) and retentive sleep. This paper presents a 14-nm graphics processing unit (GPU) prototype with modified EUs which include an integrated voltage regulator (IVR). The IVR enables energy-efficient EU turbo operation, data retention, and V_MIN optimization per EU. Silicon measurements show that IVR-enabled EU turbo operation offers up to 32% (average 29%) energy reduction at constant performance.
An Integrated DC-DC Converter with Segmented Frequency Modulation and Multiphase Co-Work Control for Fast Transient Recovery. This paper presents a fully integrated voltage-controlled oscillator (VCO)-based switched-capacitor (SC) 15-phase dc-dc converter in 65-nm CMOS. We propose two transient-enhancement techniques: segmented frequency modulation (SFM) and multiphase co-work (MCW) control to reduce the latency of the VCO-based control loop and shorten the SC dc-dc converter's transient response time. The SFM can improv...
Design of Single-Topology Continuously Scalable-Conversion-Ratio Switched-Capacitor DC–DC Converters This paper introduces a fundamentally different type of switched-capacitor dc–dc converter with large voltage swing flying capacitors, which is made efficient using advanced multiphasing soft charging. A single-capacitor topology is derived that behaves like a gyrator and achieves a continuously scalable conversion ratio with high efficiency. A circuit fabricated in a 28-nm CMOS process demonstrates the working principle of the presented topology and advances the state of the art by achieving a peak efficiency of 93%, and a single-topology 0.9–2.03-V output voltage range with more than 90% efficiency at an input voltage of 2 V.
Unreliable failure detectors for reliable distributed systems We introduce the concept of unreliable failure detectors and study how they can be used to solve Consensus in asynchronous systems with crash failures. We characterise unreliable failure detectors in terms of two properties—completeness and accuracy. We show that Consensus can be solved even with unreliable failure detectors that make an infinite number of mistakes, and determine which ones can be used to solve Consensus despite any number of crashes, and which ones require a majority of correct processes. We prove that Consensus and Atomic Broadcast are reducible to each other in asynchronous systems with crash failures; thus, the above results also apply to Atomic Broadcast. A companion paper shows that one of the failure detectors introduced here is the weakest failure detector for solving Consensus [Chandra et al. 1992].
Time-delay systems: an overview of some recent advances and open problems After presenting some motivations for the study of time-delay system, this paper recalls modifications (models, stability, structure) arising from the presence of the delay phenomenon. A brief overview of some control approaches is then provided, the sliding mode and time-delay controls in particular. Lastly, some open problems are discussed: the constructive use of the delayed inputs, the digital implementation of distributed delays, the control via the delay, and the handling of information related to the delay value.
Bayesian Network Classifiers Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
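For reference, a from-scratch naive Bayes sketch of the independence assumption discussed above (the Gaussian flavor, not the TAN extension the paper proposes); the class structure and toy data are assumptions:

```python
import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes: each feature is modeled as an independent univariate
    Gaussian per class, and prediction maximizes log prior + log likelihood."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.prior = {c: np.mean(y == c) for c in self.classes}
        self.mu  = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            log_lik = -0.5 * (np.log(2 * np.pi * self.var[c])
                              + (X - self.mu[c]) ** 2 / self.var[c]).sum(axis=1)
            scores.append(np.log(self.prior[c]) + log_lik)
        return self.classes[np.argmax(np.stack(scores), axis=0)]

# Usage on a toy two-class problem.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
print(GaussianNaiveBayes().fit(X, y).predict(X[:5]))
```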
Time-optimal leader election in general networks This note presents a simple time-optimal distributed algorithm for electing a leader in a general network. For several important classes of networks this algorithm is also message-optimal and thus performs better than previous algorithms for the problem.
Bundled execution of recurring traces for energy-efficient general purpose processing Technology scaling has delivered on its promises of increasing device density on a single chip. However, the voltage scaling trend has failed to keep up, introducing tight power constraints on manufactured parts. In such a scenario, there is a need to incorporate energy-efficient processing resources that can enable more computation within the same power budget. Energy efficiency solutions in the past have typically relied on application specific hardware and accelerators. Unfortunately, these approaches do not extend to general purpose applications due to their irregular and diverse code base. Towards this end, we propose BERET, an energy-efficient co-processor that can be configured to benefit a wide range of applications. Our approach identifies recurring instruction sequences as phases of "temporal regularity" in a program's execution, and maps suitable ones to the BERET hardware, a three-stage pipeline with a bundled execution model. This judicious off-loading of program execution to a reduced-complexity hardware demonstrates significant savings on instruction fetch, decode and register file accesses energy. On average, BERET reduces energy consumption by a factor of 3-4X for the program regions selected across a range of general-purpose and media applications. The average energy savings for the entire application run was 35% over a single-issue in-order processor.
Fully Integrated CMOS Power Amplifier With Efficiency Enhancement at Power Back-Off This paper presents a new approach for power amplifier design using deep submicron CMOS technologies. A transformer based voltage combiner is proposed to combine power generated from several low-voltage CMOS amplifiers. Unlike other voltage combining transformers, the architecture presented in this paper provides greater flexibility to access and control the individual amplifiers in a voltage comb...
Understanding the regenerative comparator circuit The regenerative comparator circuit which lies at the heart of A/D conversion, slicer circuits, and memory sensing, is unstable, time-varying, nonlinear, and with multiple equilibria. That does not mean, as this paper shows, that it cannot be understood with simple equivalent circuits that reveal its dynamics completely, and enable it to be designed to specifications on static and dynamic offset and noise. The analysis is applied to the StrongArm latch.
Power Efficiency Comparison of Event-Driven and Fixed-Rate Signal Conversion and Compression for Biomedical Applications Energy-constrained biomedical recording systems need power-efficient data converters and good signal compression in order to meet the stringent power consumption requirements of many applications. In literature today, typically a SAR ADC in combination with digital compression is used. Recently, alternative event-driven sampling techniques have been proposed that incorporate compression in the ADC, such as level-crossing A/D conversion. This paper describes the power efficiency analysis of such level-crossing ADC (LCADC) and the traditional fixed-rate SAR ADC with simple compression. A model for the power consumption of the LCADC is derived, which is then compared to the power consumption of the SAR ADC with zero-order hold (ZOH) compression for multiple biosignals (ECG, EMG, EEG, and EAP). The LCADC is more power efficient than the SAR ADC up to a cross-over point in quantizer resolution (for example 8 bits for an EEG signal). This cross-over point decreases with the ratio of the maximum to average slope in the signal of the application. It also changes with the technology and design techniques used. The LCADC is thus suited for low to medium resolution applications. In addition, the event-driven operation of an LCADC results in fewer data to be transmitted in a system application. The event-driven LCADC without timer and with single-bit quantizer achieves a reduction in power consumption at system level of two orders of magnitude, an order of magnitude better than the SAR ADC with ZOH compression. At system level, the LCADC thus offers a big advantage over the SAR ADC.
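A toy sketch of the level-crossing idea the comparison rests on: samples are emitted only when the input crosses a new quantizer level, so slowly varying stretches of a biosignal produce few events. The step size and test waveform are illustrative, and rounding to the nearest level is a simplification of a real LCADC:

```python
import numpy as np

def level_crossing_sample(signal, t, delta):
    """Emit (time, level) events whenever the signal reaches a new quantizer
    level spaced `delta` apart; events are produced only when the input
    changes, which is the source of the compression discussed above."""
    events = []
    last_level = np.round(signal[0] / delta) * delta
    for ti, x in zip(t, signal):
        level = np.round(x / delta) * delta
        if level != last_level:
            events.append((ti, level))
            last_level = level
    return events

# Usage: a narrow pulse (loosely ECG-like) triggers events mainly on its edges.
t = np.linspace(0, 1, 10_000)
x = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))
events = level_crossing_sample(x, t, delta=1 / 16)   # roughly 4-bit levels
print(f"{len(events)} events instead of {len(t)} uniform samples")
```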
1.2
0.2
0.2
0.2
0.2
0.066667
0
0
0
0
0
0
0
0
A 40-nm CMOS 7-b 32-GS/s SAR ADC With Background Channel Mismatch Calibration. This brief presents a 7-b 32-GS/s successive approximation register analog-to-digital converter (ADC) using a massive time-interleaving (TI) architecture. For low-skew multi-phase clock generation, a delay-locked loop (DLL) phase detector (PD) with reduced offset is proposed to minimize skew between the clocks. Different clock path delays caused by distributed sub-ADCs over a large ar...
A 4-Bit, 1.6 GS/s Low Power Flash ADC, Based on Offset Calibration and Segmentation A low power 4-bit, 1.6 GS/s flash ADC is presented. A new power reduction technique which masks the unused blocks in a semi-pipeline chain of latches and encoders is introduced. The proposed circuit determines the unused blocks based on a pre-sensing of the signal. Moreover, a reference voltage generator with very low static power dissipation is used. Novel techniques to reduce the sensitivity to dynamic noise are proposed to suppress the noise effects on the reference generator. The proposed circuit reduces the power consumption by 20 percent compared to the conventional structure when a Nyquist rate OFDM signal is applied. The INL and DNL of the converter are smaller than 0.3 LSB after calibration. The converter offers 3.8 effective number of bits (ENOB) at a 1.6 GS/s sampling rate with a low frequency input signal and more than 1.8 GHz effective resolution bandwidth (ERBW) at this sampling rate. The converter consumes a mere 15.5 mW from a 1.8 V supply, yielding an FoM of 695 fJ/conversion-step, and occupies 0.3 mm² in a 0.18 μm standard CMOS process.
Integration of Array Antennas in Chip Package for 60-GHz Radios. This paper discusses the integration of array antennas in chip packages for highly integrated 60-GHz radios. First, we evaluate fixed-beam array antennas, showing that most of them suffer from feed network complexity and require sophisticated process techniques to achieve enhanced performance. We describe the grid array antenna and show that it is a good choice for fixed-beam array antenna applicatio...
IEEE 802.11ad: directional 60 GHz communication for multi-Gigabit-per-second Wi-Fi [Invited Paper] With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave band. However, multi-gigabit-per-second throughput and small interference footprint come at the price of adverse signal propagation characteristics, and require a fundamental rethinking of Wi-Fi communication principles. This article describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular, we study the transition from omnidirectional to highly directional communication and its impact on the design of IEEE 802.11ad.
A mm-Precise 60 GHz Transmitter in 40 nm CMOS for Discrete-Carrier Indoor Localization This paper presents a multicarrier 60 GHz transmitter for distance measurement (ranging) in an indoor wireless localization system, achieving mm-precision with high update rate. The architecture comprises a baseband subcarrier generator, an upconverter, and a power amplifier. There are three key innovations, all stemming from careful hardware-algorithm co-design: 1) efficient frequency planning of the 6 GHz-wide band; 2) power-efficient multicarrier signal generation by means of digital frequency divisions exploiting the phase-based time-of-arrival ranging algorithm; and 3) PAPR reduction to enable efficient operation of the power amplifier. By implementing these key techniques, 0.7–2.7 mm precision is achieved over 5 m measured distance with 5.4 s symbol duration. During operation, the core digital subcarrier generator generates 16 non-equidistant subcarriers from a 3 GHz input clock, while consuming an average power of 1.8 mW from a 0.9 V supply. The upconverter and the power amplifier altogether consume around 127 mW. The total area of the transmitter is 1.1 mm². The chip is fabricated in a 40 nm general purpose CMOS process.
Time-Interleaved SAR ADC with Background Timing-Skew Calibration for UWB Wireless Communication in IoT Systems. Ultra-wideband (UWB) wireless communication is prospering as a powerful partner of the Internet-of-things (IoT). Due to the ongoing development of UWB wireless communications, the demand for high-speed, medium-resolution analog-to-digital converters (ADCs) continues to grow. Successive approximation register (SAR) ADCs are the strongest candidates to meet these demands, attracting both industry and academia. In particular, recent time-interleaved SAR ADCs show that multi-giga-sample-per-second (GS/s) rates can be achieved by overcoming the challenges of high-speed implementation of existing SAR ADCs. However, there are still critical issues that need to be addressed before time-interleaved SAR ADCs can be applied in real commercial applications. The most well-known problem is that the time-interleaved SAR ADC architecture requires multiple sub-ADCs, and the mismatches between these sub-ADCs can significantly degrade overall ADC performance. One of the most difficult mismatches to solve is the sampling timing skew. Recently, research to solve this timing-skew problem has been intensively studied. In this paper, we focus on the cutting-edge timing-skew calibration technique using a window detector. Based on a pros-and-cons analysis of the existing techniques, we develop an idea that increases the benefits of window-detector-based timing-skew calibration techniques while minimizing the power and area overheads. Finally, through the continuous development of this idea, we propose a timing-skew calibration technique using a comparator-offset-based window detector. To demonstrate the effectiveness of the proposed technique, intensive work was performed, including the design of a 7-bit, 2.5 GS/s 5-channel time-interleaved SAR ADC and various simulations; the results prove excellent efficacy, with a signal-to-noise and distortion ratio (SNDR) of 40.79 dB and a spurious-free dynamic range (SFDR) of 48.97 dB at the Nyquist frequency, while the proposed window detector occupies only 6.5% of the total active area and consumes 11% of the total power.
Design Considerations for Interleaved ADCs. Interleaving can relax the power-speed tradeoffs of analog-to-digital converters and reduce their metastability error rate while increasing the input capacitance. This paper quantifies the benefits and derives an upper bound on the performance by considering kT/C noise and slewing requirements of the circuit driving the system. A frequency-domain analysis of interleaved converters is also presente...
Achievable rates in cognitive radio channels Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1, while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
Software-defined radio receiver: dream to reality This article describes a fully integrated 90 nm CMOS software-defined radio receiver operating in the 800 MHz to 5 GHz band. Unlike the classical SDR paradigm, which digitizes the whole spectrum uniformly, this receiver acts as a signal conditioner for the analog-to-digital converters, emphasizing only the wanted channel. Thus, the ADCs operate with modest resolution and sample rate, consuming low power. This approach makes portable SDR a reality
A framework for security on NoC technologies Multiple heterogeneous processor cores, memory cores and application specific IP cores integrated in a communication network, also known as networks on chips (NoCs), will handle a large number of applications including security. Although NoCs offer more resistance to bus probing attacks, power/EM attacks and network snooping attacks are relevant. For the first time, a framework for security on NoC at both the network level (or transport layer) and at the core level (or application layer) is proposed. At the network level, each IP core has a security wrapper and a key-keeper core is included in the NoC, protecting encrypted private and public keys. Using this framework, unencrypted keys are prevented from leaving the cores and NoC. This is crucial to prevent untrusted software on or off the NoC from gaining access to keys. At the core level (application layer) the security framework is illustrated with software modification for resistance against power attacks with extremely low overheads in energy. With the emergence of secure IP cores in the market and nanometer technologies, a security framework for designing NoCs is crucial for supporting future wireless Internet enabled devices.
Architectural Evolution of Integrated M-Phase High-Q Bandpass Filters M-phase bandpass filters (BPFs) are analyzed, and variations of the structure are proposed. For values of M that are integer multiples of 4, the conventional M-phase BPF structure is modified to take complex baseband impedances and frequency-translate their complex impedance response to the local oscillator frequency. Also, it is demonstrated how the M-phase BPF can be modified to implement a high quality factor (Q) image-rejection BPF with quadrature RF inputs. In addition, we present high-Q BPFs whose center frequencies are equal to the sum or difference of the RF and IF (intermediate frequency) clocks. Such filters can be useful in heterodyne receiver architectures.
NDC: Analyzing the impact of 3D-stacked memory+logic devices on MapReduce workloads While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.1
0.1
0.1
0.1
0.1
0.1
0.025
0
0
0
0
0
0
0
A Novel Nested Q-Learning Method to Tackle Time-Constrained Competitive Influence Maximization. Time plays a critical role in competitive influence maximization. Companies aim to promote their products before certain events, such as Christmas Eve or music concerts, to gain more benefit under competition from other companies. In addition, these companies have a limited budget to spend on these product promotions. Therefore, in this paper, we examine time-constrained competitive influence maximization, where the parties wish to maximize their profits before their respective deadlines. The parties need to determine how to select the seed nodes and when to initiate information propagation in the network, such that the decision results in the optimal reward given the time and budget constraints. To this end, we propose a novel reinforcement learning-based framework named seed-combination and seed-selection that is built on a nested Q-learning (NSQ) algorithm. This way, we can derive the optimal budget allocation and node selection that result in the maximum profit. In evaluating the proposed model, we consider the scenarios when the competitors' strategy is known, unknown, and not available for training. The results show that the proposed NSQ algorithm could improve the rewards by up to 50% compared with the state-of-the-art algorithm, STORM-Q.
Enhancing peer-to-peer content discovery techniques over mobile ad hoc networks Content dissemination over mobile ad hoc networks (MANETs) is usually performed using peer-to-peer (P2P) networks due to its increased resiliency and efficiency when compared to client-server approaches. P2P networks are usually divided into two types, structured and unstructured, based on their content discovery strategy. Unstructured networks use controlled flooding, while structured networks use distributed indexes. This article evaluates the performance of these two approaches over MANETs and proposes modifications to improve their performance. Results show that unstructured protocols are extremely resilient, however they are not scalable and present high energy consumption and delay. Structured protocols are more energy-efficient, however they have a poor performance in dynamic environments due to the frequent loss of query messages. Based on those observations, we employ selective forwarding to decrease the bandwidth consumption in unstructured networks, and introduce redundant query messages in structured P2P networks to increase their success ratio.
Reducing query overhead through route learning in unstructured peer-to-peer network In unstructured peer-to-peer networks, such as Gnutella, peers propagate query messages towards the resource holders by flooding them through the network. This is, however, a costly operation since it consumes node and link resources excessively and often unnecessarily. There is no reason, for example, for a peer to receive a query message if the peer has no matching resource or is not on the path to a peer holding a matching resource. In this paper, we present a solution to this problem, which we call Route Learning, aiming to reduce query traffic in unstructured peer-to-peer networks. In Route Learning, peers try to identify the most likely neighbors through which replies can be obtained to submitted queries. In this way, a query is forwarded only to a subset of the neighbors of a peer, or it is dropped if no neighbor, likely to reply, is found. The scheme also has mechanisms to cope with variations in user submitted queries, like changes in the keywords. The scheme can also evaluate the route for a query for which it is not trained. We show through simulation results that when compared to a pure flooding based querying approach, our scheme reduces bandwidth overhead significantly without sacrificing user satisfaction.
A Trusted Routing Scheme Using Blockchain and Reinforcement Learning for Wireless Sensor Networks. A trusted routing scheme is very important to ensure the routing security and efficiency of wireless sensor networks (WSNs). There are a lot of studies on improving the trustworthiness between routing nodes, using cryptographic systems, trust management, or centralized routing decisions, etc. However, most of the routing schemes are difficult to achieve in actual situations as it is difficult to dynamically identify the untrusted behaviors of routing nodes. Meanwhile, there is still no effective way to prevent malicious node attacks. In view of these problems, this paper proposes a trusted routing scheme using blockchain and reinforcement learning to improve the routing security and efficiency for WSNs. The feasible routing scheme is given for obtaining routing information of routing nodes on the blockchain, which makes the routing information traceable and impossible to tamper with. The reinforcement learning model is used to help routing nodes dynamically select more trusted and efficient routing links. From the experimental results, we can find that even in the routing environment with 50% malicious nodes, our routing scheme still has a good delay performance compared with other routing algorithms. The performance indicators such as energy consumption and throughput also show that our scheme is feasible and effective.
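As a generic illustration of the reinforcement-learning half of such a scheme (the blockchain part is not modeled here), the sketch below is tabular Q-learning for next-hop selection on a toy topology; the reward definition, learning parameters, and network are all assumptions:

```python
import random

def q_learn_routes(links, dest, episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning for next-hop selection: the reward is the negative
    link delay, so the learned greedy policy prefers faster (and, in the
    paper's setting, more trusted) links. A generic Q-learning illustration,
    not the blockchain-backed scheme itself."""
    rng = random.Random(seed)
    Q = {(n, nxt): 0.0 for n, nbrs in links.items() for nxt in nbrs}
    for _ in range(episodes):
        node = rng.choice([n for n in links if n != dest])
        while node != dest:
            nbrs = list(links[node])
            nxt = rng.choice(nbrs) if rng.random() < eps else \
                  max(nbrs, key=lambda a: Q[(node, a)])
            reward = -links[node][nxt]                       # negative delay
            future = 0.0 if nxt == dest else max(Q[(nxt, a)] for a in links[nxt])
            Q[(node, nxt)] += alpha * (reward + gamma * future - Q[(node, nxt)])
            node = nxt
    return Q

# Toy topology: node 'a' can reach 'd' via a slow direct link or via 'b'.
links = {"a": {"b": 1.0, "d": 5.0}, "b": {"d": 1.0}, "d": {}}
Q = q_learn_routes(links, dest="d")
print(max(links["a"], key=lambda a: Q[("a", a)]))   # learned next hop from 'a'
```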
Decentralized Multi-Agent Reinforcement Learning With Networked Agents: Recent Advances Multi-agent reinforcement learning (MARL) has long been a significant research topic in both machine learning and control systems. Recent development of (single-agent) deep reinforcement learning has created a resurgence of interest in developing new MARL algorithms, especially those founded on theoretical analysis. In this paper, we review recent advances on a sub-area of this topic: decentralized MARL with networked agents. In this scenario, multiple agents perform sequential decision-making in a common environment, and without the coordination of any central controller, while being allowed to exchange information with their neighbors over a communication network. Such a setting finds broad applications in the control and operation of robots, unmanned vehicles, mobile sensor networks, and the smart grid. This review covers several of our research endeavors in this direction, as well as progress made by other researchers along the line. We hope that this review promotes additional research efforts in this exciting yet challenging area.
Chord: a scalable peer-to-peer lookup protocol for internet applications A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
Randomized algorithms This text by two well-known experts in the field presents the basic concepts in the design and analysis of randomized algorithms at a level accessible to beginning graduate students, professionals and researchers.
A Formal Basis for the Heuristic Determination of Minimum Cost Paths Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies.
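For concreteness, here is a compact A* search over a small weighted graph, with the heuristic supplied by the caller. The toy graph and the zero heuristic (which reduces A* to Dijkstra's algorithm) are illustrative assumptions, not material from the paper.

```python
import heapq

def a_star(graph, start, goal, h):
    """graph: node -> list of (neighbor, edge_cost); h(node) must not overestimate
    the remaining cost to goal (an admissible heuristic)."""
    frontier = [(h(start), 0, start, [start])]     # entries are (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(frontier, (g_new + h(nbr), g_new, nbr, path + [nbr]))
    return float("inf"), None

# Toy graph; the zero heuristic makes this equivalent to Dijkstra's algorithm.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(a_star(graph, "A", "D", h=lambda n: 0))      # (3, ['A', 'B', 'C', 'D'])
```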
Consensus problems in networks of agents with switching topology and time-delays. In this paper, we discuss consensus problems for a network of dynamic agents with fixed and switching topologies. We analyze three cases: i) networks with switching topology and no time-delays, ii) networks with fixed topology and communication time-delays, and iii) max-consensus problems (or leader determination) for groups of discrete-time agents. In each case, we introduce a linear/nonlinear consensus protocol and provide convergence analysis for the proposed distributed algorithm. Moreover, we establish a connection between the Fiedler eigenvalue of the information flow in a network (i.e. algebraic connectivity of the network) and the negotiation speed (or performance) of the corresponding agreement protocol. It turns out that balanced digraphs play an important role in addressing average-consensus problems. We introduce disagreement functions that play the role of Lyapunov functions in convergence analysis of consensus protocols. A distinctive feature of this work is to address consensus problems for networks with directed information flow. We provide analytical tools that rely on algebraic graph theory, matrix theory, and control theory. Simulations are provided that demonstrate the effectiveness of our theoretical results.
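A minimal simulation of the fixed-topology, delay-free linear consensus protocol, iterating x(k+1) = x(k) − εLx(k) with L the graph Laplacian. The ring topology, step size, and initial values are assumptions chosen only to show convergence to the average.

```python
import numpy as np

# Undirected 5-node ring (illustrative topology, not from the paper)
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
eps = 0.3                             # step size; must stay below 1/(max degree) = 0.5 here

x = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # initial agent values
for _ in range(200):
    x = x - eps * (L @ x)             # each agent nudges toward its neighbors' values
print(np.round(x, 3))                 # all entries converge to the average, 5.0
```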
Gossip-based aggregation in large dynamic networks As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure---all nodes receive the aggregate value continuously, thus being able to track any changes in the system. The protocol is also extremely lightweight, making it suitable for many distributed applications including peer-to-peer and grid computing systems. We demonstrate the efficiency and robustness of our gossip-based protocol both theoretically and experimentally under a variety of scenarios including node and communication failures.
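A sketch of the push-pull averaging step at the core of such gossip aggregation protocols. The synchronous round structure and uniform random peer selection are simplifying assumptions; the averaging step itself is the standard one.

```python
import random

def gossip_average(values, rounds=30, seed=0):
    """Push-pull averaging: each round every node contacts a random peer and both
    replace their estimates with the pair's mean; estimates converge to the average."""
    rng = random.Random(seed)
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            est[i] = est[j] = (est[i] + est[j]) / 2.0
    return est

loads = [10.0, 0.0, 4.0, 6.0, 20.0]
print(gossip_average(loads))   # every local estimate ends up near the true average, 8.0
```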
On receding horizon feedback control Receding horizon feedback control (RHFC) was originally introduced as an easy method for designing stable state-feedback controllers for linear systems. Here those results are generalized to the control of nonlinear autonomous systems, and we develop a performance index which is minimized by the RHFC (inverse optimal control problem). Previous results for linear systems have shown that desirable nonlinear controllers can be developed by making the RHFC horizon distance a function of the state. That functional dependence was implicit and difficult to implement on-line. Here we develop similar controllers for which the horizon distance is an easily computed explicit function of the state.
Cross-layer sensors for green cognitive radio. Green cognitive radio is a cognitive radio (CR) that is aware of sustainable development issues and deals with an additional constraint as regards the decision-making function of the cognitive cycle. In this paper, it is explained how the sensors distributed throughout the different layers of our CR model could help on taking the best decision in order to best contribute to sustainable development.
20.3 A feedforward controlled on-chip switched-capacitor voltage regulator delivering 10W in 32nm SOI CMOS On-chip (or fully integrated) switched-capacitor (SC) voltage regulators (SCVR) have recently received a lot of attention due to their ease of monolithic integration. The use of deep trench capacitors can lead to SCVR implementations that simultaneously achieve high efficiency, high power density, and fast response time. For the application of granular power distribution of many-core microprocessor systems, the on-chip SCVR must maintain an output voltage above a certain minimum level V_OUT,min in order for the microprocessor core to meet setup time requirements. Following a transient load change, the output voltage typically exhibits a droop due to parasitic inductances and resistances in the power distribution network. Therefore, the steady-state output voltage is kept high enough to ensure V_OUT > V_OUT,min at all times, thereby introducing an output voltage overhead that leads to increased system power consumption. The output voltage droop can be reduced by implementing fast regulation and a sufficient amount of on-chip decoupling capacitance. However, a large amount of on-chip decoupling capacitance is needed to significantly reduce the droop, and it becomes impractical to implement owing to the large chip area overhead required. This paper presents a feedforward control scheme that significantly reduces the output voltage droop in the presence of a large input voltage droop following a transient event. This in turn reduces the required output voltage overhead and may lead to significant overall system power savings.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.007692, 0, 0, 0, 0, 0, 0, 0, 0
A 16-Channel CMOS Chopper-Stabilized Analog Front-End ECoG Acquisition Circuit for a Closed-Loop Epileptic Seizure Control System. In this paper, a 16-channel analog front-end (AFE) electrocorticography signal acquisition circuit for a closed-loop seizure control system is presented. It is composed of 16 input protection circuits, 16 auto-reset chopper-stabilized capacitive-coupled instrumentation amplifiers (AR-CSCCIA) with bandpass filters, 16 programmable transconductance gain amplifiers, a multiplexer, a transimpedance am...
Theory and Implementation of an Analog-to-Information Converter using Random Demodulation The new theory of compressive sensing enables direct analog-to-information conversion of compressible signals at sub-Nyquist acquisition rates. The authors develop new theory, algorithms, performance bounds, and a prototype implementation for an analog-to-information converter based on random demodulation. The architecture is particularly apropos for wideband signals that are sparse in the time-frequency plane. End-to-end simulations of a complete transistor-level implementation prove the concept under the effect of circuit nonidealities.
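A numerical sketch of the random-demodulation principle: the input is multiplied by a pseudorandom ±1 chipping sequence at the Nyquist rate and then integrated and dumped at a much lower rate, so each measurement is a known random linear combination of the Nyquist-rate samples. The signal, rates, and sequence length are assumptions, and the compressive-sensing reconstruction step is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1024, 64                      # Nyquist-rate samples per frame, sub-Nyquist measurements
t = np.arange(N)
x = np.cos(2*np.pi*50*t/N) + 0.5*np.cos(2*np.pi*200*t/N)   # frequency-sparse test signal

chips = rng.choice([-1.0, 1.0], size=N)          # pseudorandom demodulating sequence
y = (chips * x).reshape(M, N // M).sum(axis=1)   # integrate-and-dump at 1/16 of the Nyquist rate

# Equivalent measurement matrix, y = Phi @ x, which a CS solver would invert
Phi = np.zeros((M, N))
for n in range(N):
    Phi[n // (N // M), n] = chips[n]
print(np.allclose(y, Phi @ x))                   # True
```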
Ultra-High Input Impedance, Low Noise Integrated Amplifier for Noncontact Biopotential Sensing Noncontact electrocardiogram/electroencephalogram/electromyogram electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex dc-biasing schemes. We have designed and fabricated a custom integrated noncontact sensor front-end amplifier that fully bootstraps internal and external parasitic impedances. DC stability without the need for external large valued resistances is ensured by an ac bootstrapped, low-leakage, on-chip biasing network. The amplifier achieves, without neutralization, input impedance of 60 fF ∥ 50 TΩ, input-referred noise of 0.05 fA/√Hz and 200 nV/√Hz at 1 Hz, and power consumption of 1.5 μA per channel at 3.3 V supply voltage. Stable frequency response is demonstrated below 0.05 Hz with electrode coupling capacitances as low as 0.5 pF.
A high input impedance low-noise instrumentation amplifier with JFET input This paper presents a high input impedance instrumentation amplifier with low-noise low-power operation. JFET input-pair is employed instead of CMOS to significantly reduce the flicker noise. This amplifier features high input impedance (15.3 GΩ∥1.39 pF) by using current feedback technique and JFET input. This amplifier has a mid-band gain of 39.9 dB, and draws 3.65 μA from a 2.8-V supply and exhibits an input-referred noise of 3.81 μVrms integrated from 10 mHz to 100 kHz, corresponding to a noise efficiency factor (NEF) of 3.23.
A 12-Bit 20-kS/s 640-nW SAR ADC With a VCDL-Based Open-Loop Time-Domain Comparator This brief presents a 12-bit ultra-low-power asynchronous successive approximation register (SAR) analog-to-digital converter (ADC). A voltage-controlled delay line (VCDL) based open-loop time-domain comparator is proposed and analyzed, achieving low noise and ultra-low power performance. By employing the mixed switching scheme, the segmented capacitive digital-to-analog converter (CDAC) arrays as well as the synchronous data-weighted averaging (DWA) calibration block, the proposed SAR ADC can operate from 1.8 V down to 0.8 V at 20–200 kS/s. The designed ADC is fabricated in a 0.18-μm CMOS process and the measurement results show the proposed SAR ADC achieves an SNDR of 65 dB with power consumption of 647 nW from a 0.8 V power supply at 20 kS/s.
An Energy-Efficient SAR ADC With Event-Triggered Error Correction This brief presents an energy-efficient fully differential 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) with a sampling rate of 320 kS/s. The optimal capacitor split and bypass number is analyzed to achieve the highest switching energy efficiency. The common-mode voltage level remains constant during the MSB-capacitor switching cycles. To minimize nonlinearity due to charge averaging voltage offset or DAC array mismatch, an event-triggered error correction method is employed as a redundant cycle for detecting digital code errors within 1 least significant bit (LSB). A test chip was fabricated using the 180-nm CMOS process and occupied a 0.0564-mm² core area. Under a regular 0.65-V supply voltage, the ADC achieved an effective number of bits of 9.61 bits and a figure of merit (FOM) of 6.38 fJ/conversion-step, with 1.6-μW power dissipation for a low-frequency input. The measured differential and integral nonlinearity results are within 0.30 LSB and 0.43 LSB, respectively.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
Building efficient wireless sensor networks with low-level naming In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.
Formal verification in hardware design: a survey In recent years, formal methods have emerged as an alternative approach to ensuring the quality and correctness of hardware designs, overcoming some of the limitations of traditional validation techniques such as simulation and testing. There are two main aspects to the application of formal methods in a design process: the formal framework used to specify desired properties of a design and the verification techniques and tools used to reason about the relationship between a specification and a corresponding implementation. We survey a variety of frameworks and techniques proposed in the literature and applied to actual designs. The specification frameworks we describe include temporal logics, predicate logic, abstraction and refinement, as well as containment between ω-regular languages. The verification techniques presented include model checking, automata-theoretic techniques, automated theorem proving, and approaches that integrate the above methods. In order to provide insight into the scope and limitations of currently available techniques, we present a selection of case studies where formal methods were applied to industrial-scale designs, such as microprocessors, floating-point hardware, protocols, memory subsystems, and communications hardware.
Exploiting availability prediction in distributed systems Loosely-coupled distributed systems have significant scale and cost advantages over more traditional architectures, but the availability of the nodes in these systems varies widely. Availability modeling is crucial for predicting per-machine resource burdens and understanding emergent, system-wide phenomena. We present new techniques for predicting availability and test them using traces taken from three distributed systems. We then describe three applications of availability prediction. The first, availability-guided replica placement, reduces object copying in a distributed data store while increasing data availability. The second shows how availability prediction can improve routing in delay-tolerant networks. The third combines availability prediction with virus modeling to improve forecasts of global infection dynamics.
Chameleon: a dual-mode 802.11b/Bluetooth receiver system design In this paper, an approach to map the Bluetooth and 802.11b standards specifications into an architecture and specifications for the building blocks of a dual-mode direct conversion receiver is proposed. The design procedure focuses on optimizing the performance in each operating mode while attaining an efficient dual-standard solution. The impact of the expected receiver nonidealities and the characteristics of each building block are evaluated through bit-error-rate simulations. The proposed receiver design is verified through a fully integrated implementation from low-noise amplifier to analog-to-digital converter using IBM 0.25-μm BiCMOS technology. Experimental results from the integrated prototype meet the specifications from both standards and are in good agreement with the target sensitivity.
An efficient low-cost fixed-point digital down converter with modified filter bank In radar system, as the most important part of IF radar receiver, digital down converter (DDC) extracts the baseband signal needed from modulated IF signal, and down-samples the signal with decimation factor of 20. This paper proposes an efficient low-cost structure of DDC, including NCO, mixer and a modified filter bank. The modified filter bank adopts a high-efficiency structure, including a 5-stage CIC filter, a 9-tap CFIR filter and a 15-tap HB filter, which reduces the complexity and cost of implementation compared with the traditional filter bank. Then an optimized fixed-point programming is designed in order to implement DDC on fixed-point DSP or FPGA. The simulation results show that the proposed DDC achieves an expectant specification in application of IF radar receiver.
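A floating-point model of the same chain (NCO, mixer, filtering, decimation by 20). The sample rate, the IF, and the single moving-average filter standing in for the paper's CIC/CFIR/HB bank are assumptions made only to show the structure, not the paper's fixed-point design.

```python
import numpy as np

fs, f_if, D = 100e6, 25e6, 20                 # sample rate, IF carrier, decimation factor (assumed)
n = np.arange(20000)
envelope = np.cos(2*np.pi*50e3*n/fs)          # slow test modulation
x_if = envelope * np.cos(2*np.pi*f_if*n/fs)   # modulated IF input

nco = np.exp(-1j*2*np.pi*f_if*n/fs)           # numerically controlled oscillator
mixed = x_if * nco                            # complex mixer shifts the signal to DC

taps = np.ones(64) / 64                       # moving average standing in for the CIC/CFIR/HB bank
baseband = np.convolve(mixed, taps, mode="same")
ddc_out = baseband[::D]                       # decimate by 20
print(ddc_out.shape)                          # (1000,)
```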
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0, 0, 0, 0, 0, 0, 0
A new class of asynchronous A/D converters based on time quantization This work is a contribution to a drastic change in standard signal processing chains. The main objective is to reduce the power consumption by one or two orders of magnitude. Integrated Smart Devices and Communicating Objects are application domains targeted by this work. In this context, we present a new class of Analog-to-Digital Converters (ADCs), based on an irregular sampling of the analog signal, and an asynchronous design. Because they are not conventional, a complete design methodology is presented. It determines their characteristics given the required effective number of bits and the analog signal properties. It is shown that our approach leads to a significant reduction in terms of hardware complexity and power consumption. A prototype has been designed for speech applications, using the STMicroelectronics 0.18-μm CMOS technology. Electrical simulations prove that the factor of merit is increased by more than one order of magnitude compared to synchronous Nyquist ADCs.
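A behavioral sketch of the irregular, level-crossing sampling that underlies this class of converters: a sample is produced only when the input crosses a level of a fixed amplitude grid, so the event rate follows signal activity instead of a clock. The level spacing and the test signal below are assumptions for illustration.

```python
import numpy as np

def level_crossing_sample(signal, t, delta):
    """Emit (time, level) pairs whenever the signal crosses a level of the grid spaced
    delta apart; inactive stretches of the input generate no samples at all."""
    samples = []
    last_level = np.round(signal[0] / delta) * delta
    for ti, x in zip(t[1:], signal[1:]):
        while x >= last_level + delta:        # upward crossing(s)
            last_level += delta
            samples.append((ti, last_level))
        while x <= last_level - delta:        # downward crossing(s)
            last_level -= delta
            samples.append((ti, last_level))
    return samples

t = np.linspace(0, 1, 10000)
sig = np.sin(2*np.pi*3*t) * (t > 0.4)         # silent first, then active
events = level_crossing_sample(sig, t, delta=0.1)
print(len(events), "events, first at t =", events[0][0] if events else None)
```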
MEMS-based RF channel selection for true software-defined cognitive radio and low-power sensor communications An evaluation of the potential for MEMS technologies to realize the RF front-end frequency gating spectrum analyzer function needed by true software-defined cognitive radios and ultra-low-power autonomous sensor network radios is presented. Here, RF channel selection, as opposed to band selection that removes all interferers, even those in band, and passes only the desired channel, is key to substantial potential increases in call volume with simultaneous reductions in power consumption. The relevant MEMS technologies most conducive to RF channel- selecting front-ends include vibrating micromechanical resonators that exhibit record on-chip Qs at gigahertz frequencies; resonant switches that provide extremely efficient switched-mode power gain for both transmit and receive paths; medium-scale integrated micromechanical circuits that implement on/off switchable filter-amplifier banks; and fabrication technologies that integrate MEMS together with foundry CMOS transistors in a fully monolithic low-capacitance single-chip process. The many issues that make realization of RF channel selection a truly challenging proposition include resonator drift stability, mechanical circuit complexity, repeatability and fabrication tolerances, and the need for resonators at gigahertz frequencies with simultaneous high Q (>30,000) and low impedance (e.g., 50 Ω for conventional systems). Some perspective on which resonator technologies might best achieve these simultaneous attributes is provided.
Event-Driven GHz-Range Continuous-Time Digital Signal Processor With Activity-Dependent Power Dissipation Presented is a clockless, continuous-time (CT) GHz processor that bypasses some of the limitations of conventional digital and analog implementations. Per-edge digital signal encoding is used for parallel processing of continuous-time samples with a temporal spacing as narrow as 15 ps, generated by a 3-b CT flash ADC. Parallel digital delay chains and programmable charge pumps realize the asynchronous filtering operation, each consuming negligible power while awaiting a new sample. A six-tap CT ADC and CT digital FIR processor system occupies 0.07 mm² and achieves dynamic range of over 20 dB in the 0.8-3.2-GHz signal range. The system's rate of operations automatically adapts to the signal, thus causing its power dissipation to vary in the range of 1.1 to 10 mW according to input activity.
A Nonuniform Sampling ADC Architecture With Reconfigurable Digital Anti-Aliasing Filter. This work proposes a nonuniform sampling analog-to-digital converter (ADC) architecture that incorporates a reconfigurable digital anti-aliasing (AA) filter in the asynchronous digital domain. Considering applications where the signal frequency, bandwidth, or activity may significantly vary over time and operating conditions, it provides high flexibility, relaxes analog AA filter requirements, ada...
Multiple Event Time-to-Digital Conversion-Based Pulse Digitization for a 250 MHz Pulse Radio Ranging Application A pulse digitizing approach for time-of-arrival pulse radio based ranging is introduced. It is based on a bank of time-to-digital converter (TDC) cores. A comparator bank triggers these multiple TDCs. This multiple event approach has advantages over classic single TDC solutions when facing unknown channel gains, noise corruption, and strong fading channel behavior. Pulses are digitized in a way that is superior in terms of performance versus power to classic A/D conversion. A power effort figure ξ and a new SNDR metric are introduced, easing performance comparison of pulse digitizers. A low power 8 channel digitizing system with a resolution of δt_ring = 62.5 ps is presented for a cm accurate ranging application. The asynchronous, event-based nature of the architecture requires nonstrobed comparators to fire value crossing events. A dynamic range of 800:1 is realized. The digitization device is designed for 130 nm standard CMOS. An analog-baseband front-end I-Q energy detection and comparator threshold level configuration D/As are added to the design. The complete system is designed to consume 4 mW.
A Robust QRS Detection and Accurate R-Peak Identification Algorithm for Wearable ECG Sensors This paper presents a robust QRS detection algorithm that is capable of detecting QRS complexes as well as accurately identifying R-peaks. The proposed bilateral threshold scheme combined with QRS watchdog greatly improves the detection accuracy and robustness, resulting in consistent detection performance on 9 available ECG databases. Simulations show that the proposed algorithm achieves good results on the datasets from both QTDB healthy database and MITDB arrhythmia database, i.e. the sensitivity of 99.99% and 99.88%, the precision of 99.98% and 99.88%, and the detection error rate of 0.04% and 0.31%, respectively. Furthermore, it also outperforms many existing algorithms on six other ECG databases, such as NSTDB, TWADB, STDB, SVDB, AFTDB, and FANTASIADB.
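The paper's bilateral-threshold and QRS-watchdog mechanisms are not reproduced here; the sketch below shows only the generic adaptive-threshold R-peak detection idea that such algorithms refine. The filter length, refractory period, and threshold update rule are illustrative assumptions.

```python
import numpy as np

def detect_r_peaks(ecg, fs, refractory=0.25):
    """Simplified R-peak detector: slope emphasis, squaring, moving-window
    integration, then an adaptive threshold with a refractory period.
    Returns the sample indices of detected R-peaks."""
    diff = np.diff(ecg, prepend=ecg[0])                       # emphasize steep QRS slopes
    win = max(int(0.15 * fs), 1)
    energy = np.convolve(diff ** 2, np.ones(win) / win, mode="same")
    thr = 0.5 * energy.max()                                  # initial threshold (assumed)
    peaks, last = [], -np.inf
    for i in range(1, len(energy) - 1):
        is_local_max = energy[i] >= energy[i - 1] and energy[i] >= energy[i + 1]
        if energy[i] > thr and is_local_max and i - last > refractory * fs:
            peaks.append(i)
            last = i
            thr = 0.3 * energy[i] + 0.7 * thr                 # drift threshold toward recent peaks
    return peaks
```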
Pseudo Asynchronous Level Crossing ADC for ECG Signal Acquisition. A new pseudo asynchronous level crossing analogue-to-digital converter (ADC) architecture targeted for low-power, implantable, long-term biomedical sensing applications is presented. In contrast to most of the existing asynchronous level crossing ADC designs, the proposed design has no digital-to-analogue converter (DAC) and no continuous time comparators. Instead, the proposed architecture uses a...
A 1.2-V 165-μW 0.29-mm² multibit Sigma-Delta ADC for hearing aids using nonlinear DACs and with over 91 dB dynamic-range. This paper describes the design and experimental evaluation of a multibit Sigma-Delta (ΣΔ) modulator (ΣΔM) with enhanced dynamic range (DR) through the use of nonlinear digital-to-analog converters (DACs) in the feedback paths. This nonlinearity imposes a trade-off between DR and distortion, which is well suited to the intended hearing aid application. The modulator proposed here uses a fully-diff...
All-digital PLL and transmitter for mobile phones We present the first all-digital PLL and polar transmitter for mobile phones. They are part of a single-chip GSM/EDGE transceiver SoC fabricated in a 90 nm digital CMOS process. The circuits are architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integratable with a digital baseband and application processor. To achieve this, we exploit the new paradigm of a deep-submicron CMOS process environment by leveraging on the fast switching times of MOS transistors, the fine lithography and the precise device matching, while avoiding problems related to the limited voltage headroom. The transmitter architecture is fully digital and utilizes the wideband direct frequency modulation capability of the all-digital PLL. The amplitude modulation is realized digitally by regulating the number of active NMOS transistor switches in accordance with the instantaneous amplitude. The conventional RF frequency synthesizer architecture, based on a voltage-controlled oscillator and phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter. The transmitter performs GMSK modulation with less than 0.5° rms phase error, -165 dBc/Hz phase noise at 20 MHz offset, and 10 μs settling time. The 8-PSK EDGE spectral mask is met with 1.2% EVM. The transmitter occupies 1.5 mm² and consumes 42 mA at 1.2 V supply while producing 6 dBm RF output power.
Scratchpad memory: design alternative for cache on-chip memory in embedded systems In this paper we address the problem of on-chip memory selection for computationally intensive applications, by proposing scratch pad memory as an alternative to cache. Area and energy for different scratch pad and cache sizes are computed using the CACTI tool while performance was evaluated using the trace results of the simulator. The target processor chosen for evaluation was AT91M40400. The results clearly establish scratchpad memory as a low power alternative in most situations with an average energy reduction of 40%. Further the average area-time reduction for the scratchpad memory was 46% of the cache memory.
Agreement in directed dynamic networks We study the fundamental problem of achieving consensus in a synchronous dynamic network, where an omniscient adversary controls the unidirectional communication links. Its behavior is modeled as a sequence of directed graphs representing the active (i.e. timely) communication links per round. We prove that consensus is impossible under some natural weak connectivity assumptions, and introduce vertex-stable root components as a--practical and not overly strong--means for circumventing this impossibility. Essentially, we assume that there is a short period of time during which an arbitrary part of the network remains strongly connected, while its interconnect topology keeps changing continuously. We present a consensus algorithm that works under this assumption, and prove its correctness. Our algorithm maintains a local estimate of the communication graphs, and applies techniques for detecting stable network properties and univalent system configurations. Our possibility results are complemented by several impossibility results and lower bounds, which reveal that our algorithm is asymptotically optimal.
Robust fuzzy tracking control for robotic manipulators In this paper, a stable adaptive fuzzy-based tracking control is developed for robot systems with parameter uncertainties and external disturbance. First, a fuzzy logic system is introduced to approximate the unknown robotic dynamics by using adaptive algorithm. Next, the effect of system uncertainties and external disturbance is removed by employing an integral sliding mode control algorithm. Consequently, a hybrid fuzzy adaptive robust controller is developed such that the resulting closed-loop robot system is stable and the trajectory tracking performance is guaranteed. The proposed controller is appropriate for the robust tracking of robotic systems with system uncertainties. The validity of the control scheme is shown by computer simulation of a two-link robotic manipulator.
Design of A Transformer-Based Reconfigurable Digital Polar Doherty Power Amplifier Fully Integrated in Bulk CMOS This paper presents a digital polar Doherty power amplifier (PA) fully integrated in a 65 nm bulk CMOS process. It achieves +27.3 dBm peak output power and 32.5% peak PA drain efficiency at 3.82 GHz and 3.60 GHz, respectively. The proposed digital Doherty PA architecture optimizes the cooperation of the main and auxiliary amplifiers and achieves superior back-off efficiency enhancement (a maximum 47.9% improvement versus the corresponding Class-B operation). This digital intensive architecture also allows in-field PA reconfigurability which both provides robust PA operation against antenna mismatches and allows flexible trade-off optimization on PA efficiency and linearity. Transformer-based passives are employed as the Doherty input and output networks. The input 90° signal splitter is realized by a 6-port folded differential transformer structure. The active Doherty load modulation and power combining at the PA output are achieved by two transformers in a parallel configuration. These transformer-based passives ensure an ultra-compact PA design (2.1 mm²) and broad bandwidth (24.9% for 1 dB P bandwidth). Measurement with 1 MSym/s QPSK signal shows 3.5% rms EVM with +23.5 dBm average output power and 26.8% PA drain efficiency. Measurement with 16-QAM signal exhibits the PA's flexibility on optimizing efficiency and linearity.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
Scores: 1.017021, 0.018363, 0.016554, 0.014818, 0.013333, 0.013333, 0.006667, 0.000667, 0.000048, 0, 0, 0, 0, 0
A fully-adaptive wideband 0.5-32.75Gb/s FPGA transceiver in 16nm FinFET CMOS technology. This paper describes the design of a low power fully-adaptive wideband, flexible reach transceiver in 16nm FinFET CMOS embedded within FPGA. The receiver utilizes a 3-stage CTLE with a segmented AGC to minimize parasitic peaking and 15-tap DFE to operate over both short and long channels. The transmitter uses a swing boosted CML driver architecture. Low noise wideband fractional N LC PLLs combined with linear active inductor based phase interpolators and high speed clocking are utilized for low jitter clock generation. The transceiver achieves >1200 mVdpp TX swing with <190 fs RJ and 5.39 ps TJ to achieve BER < 10⁻¹⁵ over a 30 dB loss backplane at 32.75 Gb/s, while consuming 577 mW.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
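As a concrete instance, the standard ADMM iteration for the lasso (minimize ½‖Ax − b‖² + λ‖x‖₁) alternates a ridge-like x-update, a soft-thresholding z-update, and a scaled dual update. The problem sizes, penalty ρ, and iteration count below are illustrative assumptions.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: x-update (ridge-like solve), z-update (soft threshold),
    and scaled dual update u."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    solve = np.linalg.inv(A.T @ A + rho * np.eye(n))      # cached once; fine at this toy size
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = solve @ (Atb + rho * (z - u))                 # quadratic subproblem
        z = soft(x + u, lam / rho)                        # proximal step for the l1 norm
        u = u + x - z                                     # dual ascent on the consensus constraint
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
z = lasso_admm(A, b, lam=1.0)
print(np.nonzero(np.abs(z) > 1e-3)[0])   # recovered support, mostly the first five indices
```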
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
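The paper's market-weighted headlamp beam pattern is not reproduced here; the sketch below substitutes a generic Lambertian line-of-sight model to show how distance and geometry map to an OOK bit error rate. Detector area, responsivity, transmit power, and noise power are assumed values.

```python
import math

def los_gain(d, phi_deg, psi_deg, m=1, area=1e-4):
    """Generic Lambertian LOS channel gain: d in metres, irradiance/incidence angles in
    degrees, detector area in m^2 (a stand-in for the paper's headlamp beam pattern)."""
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    return (m + 1) * area / (2 * math.pi * d ** 2) * math.cos(phi) ** m * math.cos(psi)

def ook_ber(p_tx, d, responsivity=0.5, noise_power=1e-13, **geometry):
    """OOK bit error rate Q(sqrt(SNR)) for an assumed transmit power and noise floor."""
    p_rx = responsivity * p_tx * los_gain(d, **geometry)
    snr = p_rx ** 2 / noise_power
    return 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))

for d in (5.0, 10.0, 20.0):
    print(d, ook_ber(p_tx=1.0, d=d, phi_deg=10, psi_deg=10))   # BER grows quickly with distance
```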
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
Scores: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Toward digital transmitters with Amplitude Shift Keying and Quadrature Amplitude Modulators implementation examples This paper presents new methods of implementing different kinds of Amplitude Shift Keying (ASK) and Quadrature Amplitude Modulation (QAM) signal modulators using Field Programmable Gate Array (FPGA). The implemented ASK modulators are On-Off Keying (OOK), Binary ASK (BASK), and 4ASK while the implemented QAM modulators are 4QAM and 16QAM. The targeted board for implementation was a ZYBO board which has a Zynq-7000 FPGA. One of the main tasks in implementing any digital transmitter including ASK is the generation of the sine wave carrier. In order to do that, a 24-bit phase accumulator and Look Up Table (LUT) has been used based on Direct Digital Synthesizer (DDS) technique. After getting the carrier generated, the ASK modulated signal is nothing more than the carrier signal itself at different amplitudes. For the implementation of QAM modulators, two sinusoidal carriers are needed. To get the two carriers with 90-degree phase shifts, two 24-bit accumulators working on the rising and falling edge of the main system clock were used. These two carriers were used as inputs to the multiplexer circuit which used the binary information data as a selector. Based on the incoming data, different combinations of the carriers are generated. The resultant combinations represent the QAM modulated signal. The implementation of the entire systems was done in Very high speed integrated circuit Hardware Description Language (VHDL) without the help of Xilinx System Generator or DSP Builder Tools. The implemented systems were downloaded onto the ZYBO board in order to evaluate the utilization of available resources. Low utilizations were achieved in all implemented examples.
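A software model of the phase-accumulator DDS and on-off keying described above: the 24-bit accumulator width comes from the text, while the LUT size, clock frequency, carrier frequency, and samples per bit are assumptions.

```python
import numpy as np

ACC_BITS = 24                        # phase-accumulator width, as in the text
LUT_BITS = 10                        # sine LUT address width (assumed)
LUT = np.sin(2 * np.pi * np.arange(2 ** LUT_BITS) / 2 ** LUT_BITS)

def dds_carrier(f_out, f_clk, n_samples):
    """DDS: the accumulator advances by a tuning word each clock; its top bits address the LUT."""
    tuning_word = round(f_out / f_clk * 2 ** ACC_BITS)
    phase = (np.arange(n_samples) * tuning_word) % 2 ** ACC_BITS
    return LUT[phase >> (ACC_BITS - LUT_BITS)]

def ook_modulate(bits, samples_per_bit, f_out, f_clk):
    """On-off keying: the carrier is gated on for '1' bits and off for '0' bits."""
    carrier = dds_carrier(f_out, f_clk, len(bits) * samples_per_bit)
    gate = np.repeat(np.asarray(bits, dtype=float), samples_per_bit)
    return carrier * gate

tx = ook_modulate([1, 0, 1, 1, 0], samples_per_bit=100, f_out=1e6, f_clk=100e6)
print(tx.shape)                      # (500,)
```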
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
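As a small worked example of the method for one of the problems mentioned above (the lasso), the sketch below runs scaled-form ADMM with a ridge-like x-update, a soft-thresholding z-update, and a dual update; the penalty parameter, synthetic data, and fixed iteration count are arbitrary choices for illustration.

```python
# Sketch: ADMM for the lasso  min 0.5*||Ax - b||^2 + lam*||x||_1
# (illustrative parameters; no stopping criterion beyond a fixed iteration count).
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA_rhoI_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached solve factor
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = AtA_rhoI_inv @ (Atb + rho * (z - u))   # x-update (ridge-like solve)
        z = soft(x + u, lam / rho)                 # z-update (soft-thresholding)
        u = u + x - z                              # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))      # sparse estimate, dominated by first 3 entries
```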
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Top K-leader election in mobile ad hoc networks Many applications in mobile ad hoc networks (MANETs) require multiple nodes to act as leaders. Given the resource constraints of mobile nodes, it is desirable to elect resource-rich nodes with higher energy or computational capabilities as leaders. In this paper, we propose a novel distributed algorithm to elect top-K weighted leaders in MANETs, where weight indicates available node resources. Frequent topology changes, limited energy supplies, and variable message delays in MANETs make weight-based K-leader election a non-trivial task. So far, there is no algorithm for weight-based K-leader election in distributed or mobile environments. Moreover, existing single-leader election algorithms for ad hoc networks are either unsuitable for extension to weight-based K-leader election or perform poorly under dynamic network conditions. In our proposed algorithm, initially a few coordinator nodes are selected locally, which collect the weights of other nodes using the diffusing computation approach. The coordinator nodes then collaborate so that, finally, the highest-weight coordinator collects the weights of all the nodes in the network. Besides simulation, we have also implemented our algorithm on a testbed and conducted experiments. The results prove that our proposed algorithm is scalable, reliable, message-efficient, and can handle dynamic topological changes in an efficient manner.
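The core data-gathering step of weight-based top-K election can be sketched as a convergecast over a spanning tree rooted at a coordinator, followed by picking the K heaviest nodes; the graph, weights, and single static coordinator below are illustrative assumptions, and the paper's multiple coordinators, diffusing computations, and mobility handling are not modelled.

```python
# Sketch: collect every node's weight via a convergecast over a spanning tree,
# then elect the K heaviest nodes as leaders (toy, static topology).

def convergecast(adj, weights, root, parent=None):
    """Return the (node, weight) pairs gathered in root's subtree."""
    gathered = [(root, weights[root])]
    for child in adj[root]:
        if child != parent:
            gathered += convergecast(adj, weights, child, root)
    return gathered

def elect_top_k(adj, weights, coordinator, k):
    collected = convergecast(adj, weights, coordinator)
    return [node for node, _ in sorted(collected, key=lambda p: -p[1])[:k]]

adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}   # a tree-shaped MANET
weights = {"a": 3, "b": 9, "c": 5, "d": 7}                         # e.g. remaining energy
print(elect_top_k(adj, weights, coordinator="a", k=2))             # ['b', 'd']
```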
Mechanism Design-Based Secure Leader Election Model for Intrusion Detection in MANET In this paper, we study leader election in the presence of selfish nodes for intrusion detection in mobile ad hoc networks (MANETs). To balance the resource consumption among all nodes and prolong the lifetime of an MANET, nodes with the most remaining resources should be elected as the leaders. However, there are two main obstacles in achieving this goal. First, without incentives for serving others, a node might behave selfishly by lying about its remaining resources and avoiding being elected. Second, electing an optimal collection of leaders to minimize the overall resource consumption may incur a prohibitive performance overhead, if such an election requires flooding the network. To address the issue of selfish nodes, we present a solution based on mechanism design theory. More specifically, the solution provides nodes with incentives in the form of reputations to encourage nodes in honestly participating in the election process. The amount of incentives is based on the Vickrey, Clarke, and Groves (VCG) model to ensure truth-telling to be the dominant strategy for any node. To address the optimal election issue, we propose a series of local election algorithms that can lead to globally optimal election results with a low cost. We address these issues in two possible application settings, namely, Cluster-Dependent Leader Election (CDLE) and Cluster-Independent Leader Election (CILE). The former assumes given clusters of nodes, whereas the latter does not require any preclustering. Finally, we justify the effectiveness of the proposed schemes through extensive experiments.
The Eventual Leadership in Dynamic Mobile Networking Environments Eventual leadership has been identified as a basic building block to solve synchronization or coordination problems in distributed computing systems. However, it is a challenging task to implement the eventual leadership facility, especially in dynamic distributed systems, where the global system structure is unknown to the processes and can vary over time. This paper studies the implementation of a leadership facility in infrastructured mobile networks, where an unbounded set of mobile hosts arbitrarily move in the area covered by fixed mobile support stations. Mobile hosts can crash and suffer from disconnections. We develop an eventual leadership protocol based on a time-free approach. The mobile support stations exchange queries and responses on behalf of mobile hosts. With assumptions on the message exchange flow, a correct mobile host is eventually elected as the unique leader. Since no time property is assumed on the communication channels, the proposed protocol is especially effective and efficient in mobile environments, where time-based properties are difficult to satisfy due to the dynamics of the network.
Eventual Leader Election in Evolving Mobile Networks.
Max-Min D-Cluster Formation in Wireless Ad Hoc Networks An ad hoc network may be logically represented as a set of clusters. The clusterheads form a d-hop dominating set. Each node is at most d hops from a clusterhead. Clusterheads form a virtual backbone and may be used to route packets for nodes in their cluster. Previous heuristics restricted themselves to 1-hop clusters. We show that the minimum d-hop dominating set problem is NP-complete. Then we present a heuristic to form d-clusters in a wireless ad hoc network. Nodes are assumed to have a non-deterministic mobility pattern. Clusters are formed by diffusing node identities along the wireless links. When the heuristic terminates, a node either becomes a clusterhead, or is at most d wireless hops away from its clusterhead. The value of d is a parameter of the heuristic. The heuristic can be run either at regular intervals, or whenever the network configuration changes. One of the features of the heuristic is that it tends to re-elect existing clusterheads even when the network configuration changes. This helps to reduce the communication overheads during transition from old clusterheads to new clusterheads. Also, there is a tendency to evenly distribute the mobile nodes among the clusterheads, and evenly distribute the responsibility of acting as clusterheads among all nodes. Thus, the heuristic is fair and stable. Simulation experiments demonstrate that the proposed heuristic is better than the two earlier heuristics, namely the LCA (1) and Degree-based (11) solutions.
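The two flooding phases of the heuristic are easy to sketch as synchronous rounds; in the toy example below, node ids stand in for the diffused identities on a small path graph, and the paper's clusterhead election rules, which inspect the per-round winners of both phases, are deliberately left out.

```python
# Sketch: the floodmax/floodmin phases of the Max-Min d-cluster heuristic on a toy path graph.
# Node ids serve as the election weight; only the flooding phases are shown here.

def flood(adj, values, rounds, combine):
    """One synchronous flooding phase: every round, each node combines its own value
    with its neighbours' values (max for floodmax, min for floodmin)."""
    for _ in range(rounds):
        values = {v: combine([values[v]] + [values[u] for u in adj[v]]) for v in adj}
    return values

d = 2
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}   # path 0-1-2-3-4-5
ids = {v: v for v in adj}
wmax = flood(adj, ids, d, max)    # floodmax: large ids spread up to d hops
wmin = flood(adj, wmax, d, min)   # floodmin: lets smaller ids reclaim territory
print("after floodmax:", wmax)
print("after floodmin:", wmin)
```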
Impossibility of distributed consensus with one faulty process The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.
Computing size-independent matrix problems on systolic array processors A methodology to transform dense to band matrices is presented in this paper. This transformation is accomplished by triangular blocks partitioning, and allows the implementation of solutions to problems of any given size by means of contraflow systolic arrays, originally proposed by H.T. Kung. Matrix-vector and matrix-matrix multiplications are the operations considered here. The proposed transformations allow the optimal utilization of processing elements (PEs) of the systolic array when dense matrices are operated on. Every computation is made inside the array by using adequate feedback. The feedback delay time depends only on the systolic array size.
A 12 bit 2.9 GS/s DAC With IM3 ≪ −60 dBc Beyond 1 GHz in 65 nm CMOS A 12 bit 2.9 GS/s current-steering DAC implemented in 65 nm CMOS is presented, with an IM3 < −60 dBc beyond 1 GHz while driving a 50 Ω load with an output swing of 2.5 Vppd and dissipating a power of 188 mW. The SFDR measured at 2.9 GS/s is better than 60 dB beyond 340 MHz while the SFDR measured at 1.6 GS/s is better than 60 dB beyond 440 MHz. The increase in performance at high-frequencies, co...
The M-Machine multicomputer The M-Machine is an experimental multicomputer being developed to test architectural concepts motivated by the constraints of modern semiconductor technology and the demands of programming systems. The M-Machine computing nodes are connected with a 3-D mesh network; each node is a multithreaded processor incorporating 9 function units, on-chip cache, and local memory. The multiple function units are used to exploit both instruction-level and thread-level parallelism. A user accessible message passing system yields fast communication and synchronization between nodes. Rapid access to remote memory is provided transparently to the user with a combination of hardware and software mechanisms. This paper presents the architecture of the M-Machine and describes how its mechanisms attempt to maximize both single thread performance and overall system throughput. The architecture is complete and the MAP chip, which will serve as the M-Machine processing node, is currently being implemented.
SPONGENT: a lightweight hash function This paper proposes spongent - a family of lightweight hash functions with hash sizes of 88 (for preimage resistance only), 128, 160, 224, and 256 bits based on a sponge construction instantiated with a present-type permutation, following the hermetic sponge strategy. Its smallest implementations in ASIC require 738, 1060, 1329, 1728, and 1950 GE, respectively. To our best knowledge, at all security levels attained, it is the hash function with the smallest footprint in hardware published so far, the parameter being highly technology dependent. spongent offers a lot of flexibility in terms of serialization degree and speed. We explore some of its numerous implementation trade-offs. We furthermore present a security analysis of spongent. Basing the design on a present-type primitive provides confidence in its security with respect to the most important attacks. Several dedicated attack approaches are also investigated.
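To make the sponge construction concrete, the sketch below shows only the absorb/squeeze skeleton; the permutation is a meaningless toy stand-in rather than the present-type permutation spongent uses, and the rate, capacity, padding, and output length are illustrative numbers, not the published parameter sets.

```python
# Sketch: the absorb/squeeze skeleton of a sponge-based hash like SPONGENT.
# The permutation below is a toy stand-in with no cryptographic value, NOT the
# present-type permutation from the paper; all sizes are illustrative only.

RATE, CAPACITY, OUT_BITS = 8, 80, 88          # state width b = r + c = 88 bits
WIDTH = RATE + CAPACITY

def toy_permutation(state: int) -> int:
    """Placeholder b-bit permutation (xor a constant, then rotate)."""
    mask = (1 << WIDTH) - 1
    for r in range(5):
        state ^= 0x9E3779B9 ^ r
        state = ((state << 13) | (state >> (WIDTH - 13))) & mask
    return state

def sponge_hash(msg: bytes) -> int:
    msg = msg + b"\x80"                        # simple one-byte padding (illustrative)
    state = 0
    for byte in msg:                           # absorbing phase (rate = one byte)
        state = toy_permutation(state ^ (byte << CAPACITY))
    digest = 0
    for _ in range(OUT_BITS // RATE):          # squeezing phase
        digest = (digest << RATE) | (state >> CAPACITY)
        state = toy_permutation(state)
    return digest

print(hex(sponge_hash(b"hello")))
```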
CoCo: coding-based covert timing channels for network flows In this paper, we propose CoCo, a novel framework for establishing covert timing channels. The CoCo covert channel modulates the covert message in the inter-packet delays of the network flows, while a coding algorithm is used to ensure the robustness of the covert message to different perturbations. The CoCo covert channel is adjustable: by adjusting certain parameters one can trade off different features of the covert channel, i.e., robustness, rate, and undetectability. By simulating the CoCo covert channel using different coding algorithms we show that CoCo improves the covert robustness as compared to the previous research, while being practically undetectable.
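A stripped-down sketch of the basic mechanism, modulating covert bits onto inter-packet delays and protecting them with a trivial repetition code; the delay levels, jitter model, and code are invented for illustration and do not reproduce CoCo's actual coding algorithms or its undetectability analysis.

```python
# Sketch: encoding covert bits in inter-packet delays, as in timing channels like CoCo.
import random

BASE_DELAY, EXTRA_DELAY, REPEAT = 0.050, 0.020, 3   # seconds; toy parameters

def encode(bits):
    """Map each covert bit to REPEAT inter-packet delays (repetition code)."""
    return [BASE_DELAY + (EXTRA_DELAY if b else 0.0) for b in bits for _ in range(REPEAT)]

def decode(delays):
    """Majority-vote each group of REPEAT observed delays back into a bit."""
    thresh = BASE_DELAY + EXTRA_DELAY / 2
    groups = [delays[i:i + REPEAT] for i in range(0, len(delays), REPEAT)]
    return [int(sum(d > thresh for d in g) > REPEAT // 2) for g in groups]

message = [1, 0, 1, 1, 0]
observed = [d + random.gauss(0, 0.003) for d in encode(message)]   # network jitter
print(decode(observed) == message)   # True (with high probability under this jitter)
```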
Design of ultra-wide-load, high-efficient DC-DC buck converters The paper presents the design of a current-mode control DC-DC buck converter with pulse-width modulation (PWM) mode. The converter achieves over 90% efficiency for load currents ranging from 50 mA to 500 mA, with a maximum power efficiency of 95.6%; the circuit was simulated with the TSMC 0.35 um CMOS process. In order to achieve high efficiency over an ultra-wide load range, this paper implements two PMOS transistors as switches. Results show that the converter achieves above 90% efficiency over the range from 30 mA to 1200 mA with a maximum efficiency of 96.36%. Results show that, with the additional switch transistor, the current load range is expanded by more than a factor of two. With two PMOS transistors, the proposed converter can also achieve 3 different load ranges so that it can be programmed for applications which operate at those three different load ranges.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1.066667
0.033333
0.033333
0.013333
0.006061
0
0
0
0
0
0
0
0
0
A 0.4–6 GHz Receiver for Cellular and WiFi Applications A wideband RF receiver employs a multi-loop architecture to ease the tradeoff between noise and linearity, a new method of harmonic rejection that relaxes gain and phase matching requirements, and a new op amp topology to achieve a wide bandwidth with low power consumption. Furthermore, multiple techniques are introduced to improve the out-of-band and in-band linearity, input matching, and stability. Fabricated in 28-nm CMOS technology, the prototype accommodates channel bandwidths from 200 kHz to 160 MHz and exhibits a noise figure of 2.1–4.42 dB while drawing 23–49 mW. It demonstrates an out-of-band IIP3 of 2.8–9.8 dBm and provides more than 60 dB of rejection for blockers at the third and fifth harmonics of the LO.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2Mbps.
A 32-Channel Time-Multiplexed Artifact-Aware Neural Recording System This paper presents a low-power, low-noise microsystem for the recording of neural local field potentials or intracranial electroencephalographic signals. It features 32 time-multiplexed channels at the electrode interface and offers the possibility to spatially delta encode data to take advantage of the large correlation of signals captured from nearby channels. The circuit also implements a mixed-signal voltage-triggered auto-ranging algorithm which allows to attenuate large interferers in digital domain while preserving neural information. This effectively increases the system dynamic range and avoids the onset of saturation. A prototype, fabricated in a standard 180 nm CMOS process, has been experimentally verified in-vitro with cellular cultures of primary cortical neurons from mice. The system shows an integrated input-referred noise in the 0.5–200 Hz band of 1.4 µVrms for a spot noise of about 85 nV/√Hz. The system draws 1.5 µW per channel from 1.2 V supply and obtains 71 dB + 26 dB dynamic range when the artifact-aware auto-ranging mechanism is enabled, without penalising other critical specifications such as crosstalk between channels or common-mode and power supply rejection ratios.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Power management for portable devices Today's portable devices combine wireless communication and navigation with consumer electronics. As a consequence, the computing power, the size of the display and the graphics operations of portable devices are increasing by a large amount. Even when changing to new process technologies for the processors, the power consumption often increases due to the higher operating frequencies. The battery technology developments are not improving by the same factor and as a result, intelligent power management becomes mandatory to achieve the required operating hours and days. In addition, the portable devices become smaller and slimmer which requires a reduction of the number and the size of the components. In this paper, a highly integrated power management IC will be presented and power management trends and requirements for portable devices will be discussed.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use dominance frontiers, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ◊W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◊W. Thus, ◊W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)- size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε2}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(logN), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator; while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via error amplifier) with a moderate-gain high-frequency path (via bandpass filter) at the inputs of PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate two zeros required in a Type III compensator. Constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process with constant Gm/C biasing technique being applied to one of the designs. Measurement result shows that converter output is settled within 7 μs for a load current step of 500 mA. Peak efficiency of 97% is obtained at 360 mW output power, and high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of proposed compensator is reduced by > 75 % in both designs, compared to an equivalent conventional Type III compensator.
Conductance modulation techniques in switched-capacitor DC-DC converter for maximum-efficiency tracking and ripple mitigation in 22nm Tri-gate CMOS Active conduction modulation techniques are demonstrated in a fully integrated multi-ratio switched-capacitor voltage regulator with hysteretic control, implemented in 22nm tri-gate CMOS with high-density MIM capacitor. We present (i) an adaptive switching frequency and switch-size scaling scheme for maximum efficiency tracking across a wide range of voltages and currents, governed by a frequency-based control law that is experimentally validated across multiple dies and temperatures, and (ii) a simple active ripple mitigation technique to modulate gate drive of select MOSFET switches effectively in all conversion modes. Efficiency boosts of up to 15% are measured under light-load conditions. Load-independent output ripple of <50 mV is achieved, enabling fewer interleaving phases. Testchip implementations and measurements demonstrate ease of integration in SoC designs, power efficiency benefits and EMI/RFI improvements.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54 ×(CPU), 1.56 ×(GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02 ×(memristor-based PIM) and 10.48 ×(CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitiv...
Networks of spiking neurons: the third generation of neural network models The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch–Pitts neurons (i.e., threshold gates) and sigmoidal gates, respectively. In particular, it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.
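To make the spiking-neuron model class above concrete, the following sketch simulates a single leaky integrate-and-fire neuron, the simplest member of that third generation. It is purely illustrative: the membrane time constant, threshold, and drive values are arbitrary assumptions, not parameters from the paper.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau_m=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; returns spike times (s).

    input_current: 1-D array of input drive per time step (arbitrary units).
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (-(v - v_rest) + i_in) / tau_m
        v += dt * (-(v - v_rest) + i_in) / tau_m
        if v >= v_thresh:            # threshold crossing emits a spike
            spike_times.append(step * dt)
            v = v_reset              # reset after the spike
    return spike_times

# Example: constant supra-threshold drive produces a regular spike train.
drive = np.full(1000, 1.5)           # 1 s of input at 1 ms resolution
print(simulate_lif(drive)[:5])
```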
First-Spike-Based Visual Categorization Using Reward-Modulated STDP. Reinforcement learning (RL) has recently regained popularity with major achievements such as beating the European champion of the game of Go. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme wher...
STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition Spiking neural networks (SNNs) with a large number of weights and a varied weight distribution can be difficult to implement in emerging in-memory computing hardware due to the limitations on crossbar size (implementing the dot product), the constrained number of conductance states in non-CMOS devices, and the power budget. We present a sparse SNN topology where noncritical connections are pruned to reduce the network size, and the remaining critical synapses are weight-quantized to accommodate the limited number of conductance states. Pruning is based on the power-law weight-dependent spike-timing-dependent plasticity (STDP) model; synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The weights of the retained connections are quantized to the available number of conductance states. The process of pruning noncritical connections and quantizing the weights of critical synapses is performed at regular intervals during training. We evaluated our sparse and quantized network on the MNIST dataset and on a subset of images from the Caltech-101 dataset. The compressed topology achieved a classification accuracy of 90.1% (91.6%) on the MNIST (Caltech-101) dataset with 3.1X (2.2X) and 4X (2.6X) improvement in energy and area, respectively. The compressed topology is energy- and area-efficient while maintaining the same classification accuracy as a 2-layer fully connected SNN topology.
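The prune-then-quantize flow described above can be mimicked offline with plain array operations. The sketch below is a hypothetical illustration rather than the authors' code: the `correlation` array stands in for the measured pre/post spike correlation, the retention fraction and number of conductance levels are assumed values, and the quantizer is a simple uniform one.

```python
import numpy as np

def prune_and_quantize(weights, correlation, keep_fraction=0.3, n_levels=4):
    """Prune low-correlation synapses and quantize the survivors.

    weights       : synaptic weight matrix (pre x post).
    correlation   : spike-correlation score per synapse, same shape as weights.
    keep_fraction : fraction of synapses retained (the most correlated ones).
    n_levels      : number of available conductance states.
    """
    # Retain only the most correlated synapses (the "critical" connections).
    threshold = np.quantile(correlation, 1.0 - keep_fraction)
    mask = correlation >= threshold

    # Quantize retained weights uniformly onto n_levels conductance states.
    w = np.where(mask, weights, 0.0)
    w_min, w_max = w[mask].min(), w[mask].max()
    levels = np.linspace(w_min, w_max, n_levels)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    w_q = np.where(mask, levels[idx], 0.0)
    return w_q, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))
corr = rng.random(size=(8, 4))
w_q, mask = prune_and_quantize(w, corr)
print(mask.mean(), np.unique(w_q[mask]))
```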
Application of Deep Compression Technique in Spiking Neural Network Chip. In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using a deep compression technique in the spiking neural network chip, the number of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves a throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V, and energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
A Low-Cost High-Speed Neuromorphic Hardware Based on Spiking Neural Network Neuromorphic engineering is a relatively new interdisciplinary research field that draws on various areas of science and technology, such as electronics, computer science, and biology. Neuromorphic systems are software/hardware systems used to implement neural networks based on the functionality of the human brain. The goal of neuromorphic systems is to mimic the biologically inspired concepts of th...
Minitaur, an Event-Driven FPGA-Based Spiking Network Accelerator Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator, which is designed for low power and high performance. As a field-programmable gate array (FPGA)-based system, it can be integrated into existing robotics or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Edge Computing: Vision and Challenges. The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this pap...
From Static Distributed Systems to Dynamic Systems A noteworthy advance in distributed computing is due to the recent development of peer-to-peer systems. These systems are essentially dynamic in the sense that no process can get a global knowledge of the system structure. They mainly allow processes to look up data that can be dynamically added/suppressed in a permanently evolving set of nodes. Although protocols have been developed for such dynamic systems, to our knowledge, up to date no computation model for dynamic systems has been proposed. Nevertheless, there is a strong demand for the definition of such models as soon as one wants to develop provably correct protocols suited to dynamic systems. This paper proposes a model for (a class of) dynamic systems. That dynamic model is defined by (1) a parameter (an integer denoted a) and (2) two basic communication abstractions (query-response and persistent reliable broadcast). The new parameter a is a threshold value introduced to capture the liveness part of the system (it is the counterpart of the minimal number of processes that do not crash in a static system). To show the relevance of the model, the paper adapts an eventual leader protocol designed for the static model, and proves that the resulting protocol is correct within the proposed dynamic model. In that sense, the paper also has a methodological flavor, as it shows that simple modifications to existing protocols can allow them to work in dynamic systems.
Fully Monolithic Cellular Buck Converter Design for 3-D Power Delivery A fully monolithic interleaved buck dc-dc point-of-load (PoL) converter has been designed and fabricated in a 0.18-μm SiGe BiCMOS process. Target application of the design is 3-D power delivery for future microprocessors, in which the PoL converter will be vertically integrated with the processor using wafer-level 3-D interconnect technologies. Advantages of 3-D power delivery over conventional discrete voltage regulator modules (VRMs) are discussed. The prototype design, using two interleaved buck converter cells each operating at 200 MHz switching frequency and delivering 500 mA output current, is discussed with a focus on the converter power stage and control loop to highlight the tradeoffs unique to such high-frequency, monolithic designs. Measured steady-state and dynamic responses of the fabricated prototype are presented to demonstrate the ability of such monolithic converters to meet the power delivery requirements of future processors.
Analysis and Design of Passive Polyphase Filters Passive RC polyphase filters (PPFs) are analyzed in detail in this paper. First, a method to calculate the output signals of an n-stage PPF is presented. As a result, all relevant properties of PPFs, such as amplitude and phase imbalance and loss, are calculated. The rules for optimal pole frequency planning to maximize the image-reject ratio provided by a PPF are given. The loss of PPF is divided into two factors, namely the intrinsic loss caused by the PPF itself and the loss caused by termination impedances. Termination impedances known a priori can be used to derive such component values, which minimize the overall loss. The effect of parasitic capacitance and component value deviation are analyzed and discussed. The method of feeding the input signal to the first PPF stage affects the mechanisms of the whole PPF. As a result, two slightly different PPF topologies can be distinguished, and they are separately analyzed and compared throughout this paper. A design example is given to demonstrate the developed design procedure.
Primary Frequency Response From Electric Vehicles in the Great Britain Power System With the increasing use of renewable energy in the Great Britain (GB) power system, the potential for electric vehicles (EVs) to contribute to primary frequency response was investigated. A tool was developed to estimate the EV charging load based on statistical analysis of EV type, battery capacity, maximum travel range, and battery state of charge. A simplified GB power system model was used to investigate the contribution of EVs to primary frequency response. Two control modes were considered: disconnection of the charging load (case I) and discharge of stored battery energy (case II). For case II, the characteristic of the EV charger was also considered. A case study shows results for the year 2020. Three EV charging strategies, “dumb” charging, “off-peak” charging, and “smart” charging, were compared. Simulation results show that utilizing EVs to stabilize the grid frequency in the GB system can significantly reduce frequency deviations. However, the requirement to schedule frequency response from conventional generators varies throughout the day.
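The two control modes in the study above (curtailing the charging load, then discharging the battery) amount to a droop-like rule on measured frequency. The sketch below is a caricature of that behaviour under assumed parameters; the deadband, droop gain, and power ratings are illustrative, not values from the paper.

```python
def ev_frequency_response(freq_hz, charging_power_kw, max_discharge_kw,
                          nominal_hz=50.0, deadband_hz=0.015,
                          droop_kw_per_hz=100.0):
    """Return the EV's net grid power (kW, positive = drawing) at a given frequency.

    Case I behaviour: reduce/disconnect the charging load as frequency falls.
    Case II behaviour: once charging is fully shed, discharge stored energy.
    """
    deviation = nominal_hz - freq_hz
    if deviation <= deadband_hz:          # inside the deadband: charge normally
        return charging_power_kw
    response = droop_kw_per_hz * (deviation - deadband_hz)
    if response <= charging_power_kw:     # case I: curtail the charging load
        return charging_power_kw - response
    # Case II: charging fully shed, inject from the battery (negative draw).
    discharge = min(response - charging_power_kw, max_discharge_kw)
    return -discharge

for f in (50.00, 49.95, 49.80, 49.50):
    print(f, ev_frequency_response(f, charging_power_kw=7.0, max_discharge_kw=10.0))
```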
A Sub-μW Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-μW ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit 1/f^x magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot-noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is 55 nV/√Hz while the integrated noise in the AP mode (200 Hz - 5 kHz) is 4.1 μVrms. The proposed front-end achieves sub-μW operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by 6.1× and, compared with conventional wideband front-ends, it obtains a reduction of 2.5×.
1.0424
0.042
0.042
0.042
0.04
0.04
0.02416
0
0
0
0
0
0
0
A Universal Approximation Result for Difference of Log-Sum-Exp Neural Networks We show that a neural network whose output is obtained as the difference of the outputs of two feedforward networks with exponential activation function in the hidden layer and logarithmic activation function in the output node, referred to as log-sum-exp (LSE) network, is a smooth universal approximator of continuous functions over convex, compact sets. By using a logarithmic transform, this class of network maps to a family of subtraction-free ratios of generalized posynomials (GPOS), which we also show to be universal approximators of positive functions over log-convex, compact subsets of the positive orthant. The main advantage of difference-LSE networks with respect to classical feedforward neural networks is that, after a standard training phase, they provide surrogate models for a design that possesses a specific difference-of-convex-functions form, which makes them optimizable via relatively efficient numerical methods. In particular, by adapting an existing difference-of-convex algorithm to these models, we obtain an algorithm for performing an effective optimization-based design. We illustrate the proposed approach by applying it to the data-driven design of a diet for a patient with type-2 diabetes and to a nonconvex optimization problem.
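A minimal sketch of the difference-of-LSE model described above, assuming the standard log-sum-exp form f(x) = log Σ_k exp(b_k + w_k·x) for each of the two networks; the random weights below are placeholders rather than a trained surrogate.

```python
import numpy as np

def lse(x, W, b):
    """Log-sum-exp unit: log(sum_k exp(b_k + W[k] @ x)), computed stably."""
    z = b + W @ x
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def diff_lse_network(x, W1, b1, W2, b2):
    """Difference of two LSE networks: a smooth difference-of-convex model."""
    return lse(x, W1, b1) - lse(x, W2, b2)

rng = np.random.default_rng(1)
d, k = 3, 5                                  # input dimension, hidden units
W1, b1 = rng.normal(size=(k, d)), rng.normal(size=k)
W2, b2 = rng.normal(size=(k, d)), rng.normal(size=k)
print(diff_lse_network(rng.normal(size=d), W1, b1, W2, b2))
```

Each LSE term is convex in x (a log-sum-exp of affine functions), so the difference has the difference-of-convex structure the abstract exploits for optimization.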
A Probabilistic Neural-Fuzzy Learning System for Stochastic Modeling A probabilistic fuzzy neural network (PFNN) with a hybrid learning mechanism is proposed to handle complex stochastic uncertainties. Fuzzy logic systems (FLSs) are well known for vagueness processing. Embedded with the probabilistic method, an FLS will possess the capability to capture stochastic uncertainties. Further enhanced with the neural learning, it will be able to work under time-varying stochastic environment. Integrated with a statistical process control (SPC) based monitoring method, the PFNN can maintain the robust modeling performance. Finally, the successful simulation demonstrates the modeling effectiveness of the proposed PFNN under the time-varying stochastic conditions.
Design of Fuzzy-Neural-Network-Inherited Backstepping Control for Robot Manipulator Including Actuator Dynamics This study presents the design and analysis of an intelligent control system that inherits the systematic and recursive design methodology for an n-link robot manipulator, including actuator dynamics, in order to achieve a high-precision position tracking with a firm stability and robustness. First, the coupled higher order dynamic model of an n-link robot manipulator is introduced briefly. Then, a conventional backstepping control (BSC) scheme is developed for the joint position tracking of the robot manipulator. Moreover, a fuzzy-neural-network-inherited BSC (FNNIBSC) scheme is proposed to relax the requirement of detailed system information to improve the robustness of BSC and to deal with the serious chattering that is caused by the discontinuous function. In the FNNIBSC strategy, the FNN framework is designed to mimic the BSC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and Lyapunov stability theorem to ensure the network convergence as well as stable control performance. Numerical simulations and experimental results of a two-link robot manipulator that are actuated by dc servomotors are provided to justify the claims of the proposed FNNIBSC system, and the superiority of the proposed FNNIBSC scheme is also evaluated by quantitative comparison with previous intelligent control schemes.
Reactive Power Control of Three-Phase Grid-Connected PV System During Grid Faults Using Takagi–Sugeno–Kang Probabilistic Fuzzy Neural Network Control An intelligent controller based on the Takagi-Sugeno-Kang-type probabilistic fuzzy neural network with an asymmetric membership function (TSKPFNN-AMF) is developed in this paper for the reactive and active power control of a three-phase grid-connected photovoltaic (PV) system during grid faults. The inverter of the three-phase grid-connected PV system should provide a proper ratio of reactive power to meet the low-voltage ride through (LVRT) regulations and control the output current without exceeding the maximum current limit simultaneously during grid faults. Therefore, the proposed intelligent controller regulates the value of reactive power to a new reference value, which complies with the regulations of LVRT under grid faults. Moreover, a dual-mode operation control method of the converter and inverter of the three-phase grid-connected PV system is designed to eliminate the fluctuation of dc-link bus voltage under grid faults. Furthermore, the network structure, the online learning algorithm, and the convergence analysis of the TSKPFNN-AMF are described in detail. Finally, some experimental results are illustrated to show the effectiveness of the proposed control for the three-phase grid-connected PV system.
Discrete-Time Quasi-Sliding-Mode Control With Prescribed Performance Function and its Application to Piezo-Actuated Positioning Systems. In this paper, the constrained control problem of the prescribed performance control technique is discussed in discrete-time domain for single input-single output dynamical systems. The goal of this design is to maintain the tracking error trajectory in a predefined convergence zone described by a performance function in the presence of the uncertainties. In order to achieve this goal, the discret...
A Survey On Sliding Mode Control For Networked Control Systems In the framework of the networked control systems (NCSs), the components are connected with each other over a shared band-limited network. The merits of NCSs include easy extensibility, resource sharing, high reliability and so forth. However, the insertion of the communication network brings many challenges, such as network-induced phenomena and cyber-security, which should be handled properly. On the other hand, the sliding mode control (SMC) has become an effective scheme for the synthesis of NCSs due to its strong robustness and SMC has wide applications in NCSs. In this paper, some recent advances on SMC for NCSs are reviewed. In particular, some new SMC schemes for NCSs subject to time-delay, packet losses, quantisation and uncertainty/disturbance are summarised firstly. Subsequently, the problem of SMC for NCSs under scheduling protocols is discussed, where different communication protocols are introduced for the energy saving purpose during the synthesis of NCSs. Next, some recent results on SMC for NCSs with actuator/sensor fault and cyber-attack are recalled. Finally, the conclusion is provided and the potential research challenges on SMC for NCSs are pointed out.
The part-time parliament Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems.
Design Techniques for Fully Integrated Switched-Capacitor DC-DC Converters. This paper describes design techniques to maximize the efficiency and power density of fully integrated switched-capacitor (SC) DC-DC converters. Circuit design methods are proposed to enable simplified gate drivers while supporting multiple topologies (and hence output voltages). These methods are verified by a proof-of-concept converter prototype implemented in 0.374 mm2 of a 32 nm SOI process. ...
Distributed reset A reset subsystem is designed that can be embedded in an arbitrary distributed system in order to allow the system processes to reset the system when necessary. Our design is layered, and comprises three main components: a leader election, a spanning tree construction, and a diffusing computation. Each of these components is self-stabilizing in the following sense: if the coordination between the up-processes in the system is ever lost (due to failures or repairs of processes and channels), then each component eventually reaches a state where coordination is regained. This capability makes our reset subsystem very robust: it can tolerate fail-stop failures and repairs of processes and channels, even when a reset is in progress
Distributed multi-agent optimization with state-dependent communication We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents and the probability with which the links are available depends on the states of the agents. We study a projected multi-agent subgradient algorithm under state-dependent communication. The state-dependence of the communication introduces significant challenges and couples the study of information exchange with the analysis of subgradient steps and projection errors. We first show that the multi-agent subgradient algorithm when used with a constant stepsize may result in the agent estimates to diverge with probability one. Under some assumptions on the stepsize sequence, we provide convergence rate bounds on a “disagreement metric” between the agent estimates. Our bounds are time-nonhomogeneous in the sense that they depend on the initial starting time. Despite this, we show that agent estimates reach an almost sure consensus and converge to the same optimal solution of the global optimization problem with probability one under different assumptions on the local constraint sets and the stepsize sequence.
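The projected multi-agent subgradient step described above alternates neighbour averaging, a local subgradient step, and projection onto the local constraint set. The toy sketch below fixes a simple instance (quadratic local objectives, box constraints, state-independent random links, constant stepsize) purely for illustration; it does not reproduce the paper's state-dependent communication model.

```python
import numpy as np

def distributed_subgradient(targets, boxes, rounds=200, step=0.05,
                            p_link=0.7, seed=0):
    """Toy projected multi-agent subgradient method.

    Each agent i minimises f_i(x) = (x - targets[i])**2 subject to x in boxes[i];
    links come up at random and are symmetrised each round.
    """
    rng = np.random.default_rng(seed)
    n = len(targets)
    x = np.array([np.clip(0.0, *b) for b in boxes], dtype=float)
    for _ in range(rounds):
        # Mixing step: average with neighbours whose links are currently up.
        A = (rng.random((n, n)) < p_link) | np.eye(n, dtype=bool)
        A = A & A.T                                   # undirected links
        mixed = np.array([x[A[i]].mean() for i in range(n)])
        # Local subgradient step followed by projection onto the local box.
        grad = 2.0 * (mixed - targets)
        x = np.array([np.clip(v, *b) for v, b in zip(mixed - step * grad, boxes)])
    return x

targets = np.array([0.0, 2.0, 4.0])
boxes = [(-1.0, 3.0), (0.5, 3.5), (1.0, 5.0)]
print(distributed_subgradient(targets, boxes))   # estimates cluster near a common value
```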
Yet another MicroArchitectural Attack: exploiting I-Cache MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks.
A decentralized modular control framework for robust control of FES-activated walker-assisted paraplegic walking using terminal sliding mode and fuzzy logic control. A major challenge to developing functional electrical stimulation (FES) systems for paraplegic walking and widespread acceptance of these systems is the design of a robust control strategy that provides satisfactory tracking performance. The systems need to be robust against time-varying properties of neuromusculoskeletal dynamics, day-to-day variations, subject-to-subject variations, external dis...
PUMP: a programmable unit for metadata processing We introduce the Programmable Unit for Metadata Processing (PUMP), a novel software-hardware element that allows flexible computation with uninterpreted metadata alongside the main computation with modest impact on runtime performance (typically 10--40% for single policies, compared to metadata-free computation on 28 SPEC CPU2006 C, C++, and Fortran programs). While a host of prior work has illustrated the value of ad hoc metadata processing for specific policies, we introduce an architectural model for extensible, programmable metadata processing that can handle arbitrary metadata and arbitrary sets of software-defined rules in the spirit of the time-honored 0-1-∞ rule. Our results show that we can match or exceed the performance of dedicated hardware solutions that use metadata to enforce a single policy, while adding the ability to enforce multiple policies simultaneously and achieving flexibility comparable to software solutions for metadata processing. We demonstrate the PUMP by using it to support four diverse safety and security policies---spatial and temporal memory safety, code and data taint tracking, control-flow integrity including return-oriented-programming protection, and instruction/data separation---and quantify the performance they achieve, both singly and in combination.
A 1V 3.5 μW Bio-AFE With Chopper-Capacitor-Chopper Integrator-Based DSL and Low Power GM-C Filter This brief presents a low-noise, low-power bio-signal acquisition analog front-end (Bio-AFE). It mainly includes a capacitively coupled chopper-stabilized instrumentation amplifier (CCIA), a programmable gain amplifier (PGA), a low-pass filter (LPF), and a successive approximation analog to digital converter (SAR ADC). A chopper-capacitor-chopper integrator based DC servo loop (C3IB-DSL...
1.2
0.2
0.2
0.2
0.2
0.2
0
0
0
0
0
0
0
0
Designing wideband voltage controlled oscillators for Software-Defined Radio The demand for mobile communication systems with global coverage, interfacing with various standards and protocols, high data rates, and improved link quality for a variety of applications has dramatically increased in recent years. Software-Defined Radio (SDR) is a recent proposal to achieve these goals. In SDR, new concepts and methods that can optimally exploit the limited resources are necessary. One very important requirement of SDR is designing wideband RF hardware covering octave bands. For example, the local oscillator of the mixer should be a wideband voltage controlled oscillator (VCO). The paper presents theoretical concepts for designing VCOs. It also provides simulation of the design. Using this conceptual design and simulation, the actual VCO hardware was designed and fabricated, and the results, including very good phase noise performance, are presented.
Efficiently computing static single assignment form and the control dependence graph In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point that advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present new algorithms that efficiently compute these data structures for arbitrary control flow graphs. The algorithms use {\em dominance frontiers}, a new concept that may have other applications. We also give analytical and experimental evidence that all of these data structures are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
The weakest failure detector for solving consensus We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg (1996), it is shown that ⋄W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ⋄W. Thus, ⋄W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Chord: A scalable peer-to-peer lookup service for internet applications A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
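Chord's single operation, mapping a key onto a node, can be illustrated with the basic successor rule on a hash ring. The sketch below uses a linear scan over a known node list instead of Chord's O(log N) finger tables, so it shows only the key-to-node mapping, not the distributed lookup protocol; the 16-bit identifier space and node names are arbitrary.

```python
import hashlib

M = 16                      # identifier space of 2**M positions on the ring

def ring_id(name: str) -> int:
    """Hash a node name or key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(key: str, nodes: list) -> str:
    """Return the node whose identifier is the first clockwise from the key."""
    kid = ring_id(key)
    ids = sorted((ring_id(n), n) for n in nodes)
    for nid, name in ids:
        if nid >= kid:
            return name
    return ids[0][1]        # wrap around the ring

nodes = ["node-a", "node-b", "node-c", "node-d"]
for key in ("alice.txt", "bob.txt"):
    print(key, "->", successor(key, nodes))
```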
Database relations with null values A new formal approach is proposed for modeling incomplete database information by means of null values. The basis of our approach is an interpretation of nulls which obviates the need for more than one type of null. The conceptual soundness of this approach is demonstrated by generalizing the formal framework of the relational data model to include null values. In particular, the set-theoretical properties of relations with nulls are studied and the definitions of set inclusion, set union, and set difference are generalized. A simple and efficient strategy for evaluating queries in the presence of nulls is provided. The operators of relational algebra are then generalized accordingly. Finally, the deep-rooted logical and computational problems of previous approaches are reviewed to emphasize the superior practicability of the solution.
The Anatomy of the Grid: Enabling Scalable Virtual Organizations "Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high performance orientation. In this article, the authors define this new field. First, they review the "Grid problem," which is defined as flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources--what is referred to as virtual organizations. In such settings, unique authentication, authorization, resource access, resource discovery, and other challenges are encountered. It is this class of problem that is addressed by Grid technologies. Next, the authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing. The authors describe requirements that they believe any such mechanisms must satisfy and discuss the importance of defining a compact set of intergrid protocols to enable interoperability among different Grid systems. Finally, the authors discuss how Grid technologies relate to other contemporary technologies, including enterprise integration, application service provider, storage service provider, and peer-to-peer computing. They maintain that Grid concepts and technologies complement and have much to contribute to these other approaches.
McPAT: An integrated power, area, and timing modeling framework for multicore and manycore architectures This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90 nm to 22 nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area² product (EDA²P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22 nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA²P and EDAP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
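For the lasso mentioned above, ADMM takes a particularly clean form: a ridge-like x-update, a soft-thresholding z-update, and a scaled dual update. The dense-matrix sketch below follows that standard splitting under assumed problem sizes and a fixed penalty ρ; it is illustrative rather than tuned for large-scale use.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 with ADMM (x, z, u splitting)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Factor once: (A^T A + rho*I) is reused in every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)      # proximal step for the l1 term
        u = u + x - z                             # scaled dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=1.0), 2))
```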
Skip graphs Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors within it introduced due to node failures.
The complexity of data aggregation in directed networks We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires Ω(min{n, 1/ε²}/B) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires Ω(√n/B) rounds in networks of diameter 2, provided that the diameter is not known in advance to be o(√n/B). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal Õ(D+√n/B)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.
A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion Time-to-digital converters (TDCs) are promising building blocks for the digitalization of mixed-signal functionality in ultra-deep-submicron CMOS technologies. A short survey on state-of-the-art TDCs is given. A high-resolution TDC with low latency and low dead-time is proposed, where a coarse time quantization derived from a differential inverter delay-line is locally interpolated with passive vo...
Area- and Power-Efficient Monolithic Buck Converters With Pseudo-Type III Compensation Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III response is synthesized by summing a high-gain low-frequency path (via an error amplifier) with a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant-Gm/C biasing technique can also be adopted in PT3 to reduce the process variation of the passive components, which is not possible in a conventional Type III design. Two prototype chips are fabricated in a 0.35-μm CMOS process, with the constant-Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs compared to an equivalent conventional Type III compensator.
Fundamental analysis of a car to car visible light communication system This paper presents a mathematical model for car-to-car (C2C) visible light communications (VLC) that aims to predict the system performance under different communication geometries. A market-weighted headlamp beam pattern model is employed. We consider both the line-of-sight (LOS) and non-line-of-sight (NLOS) links, and outline the relationship between the communication distance, the system bit error rate (BER) performance and the BER distribution on a vertical plane. Results show that by placing a photodetector (PD) at a height of 0.2-0.4 m above road surface in the car, the communications coverage range can be extended up to 20 m at a data rate of 2 Mbps.
A Heterogeneous PIM Hardware-Software Co-Design for Energy-Efficient Graph Processing Processing-In-Memory (PIM) is an emerging technology that addresses the memory bottleneck of graph processing. In general, analog memristor-based PIM promises high parallelism provided that the underlying matrix-structured crossbar can be fully utilized, while digital CMOS-based PIM has a faster single-edge execution but its parallelism can be low. In this paper, we observe that there is no absolute winner between these two representative PIM technologies for graph applications, which often exhibit irregular workloads. To reap the best of both worlds, we introduce a new heterogeneous PIM hardware, called Hetraph, to facilitate energy-efficient graph processing. Hetraph incorporates memristor-based analog computation units (for high-parallelism computing) and CMOS-based digital computation cores (for efficient computing) on the same logic layer of a 3D die-stacked memory device. To maximize the hardware utilization, our software design offers a hardware heterogeneity-aware execution model and a workload offloading mechanism. For performance speedups, such a hardware-software co-design outperforms the state-of-the-art by 7.54× (CPU), 1.56× (GPU), 4.13× (memristor-based PIM) and 3.05× (CMOS-based PIM), on average. For energy savings, Hetraph reduces the energy consumption by 57.58× (CPU), 19.93× (GPU), 14.02× (memristor-based PIM) and 10.48× (CMOS-based PIM), on average.
1
0
0
0
0
0
0
0
0
0
0
0
0
0
A 4-Kb 1-to-8-bit Configurable 6T SRAM-Based Computation-in-Memory Unit-Macro for CNN-Based AI Edge Processors Previous SRAM-based computing-in-memory (SRAM-CIM) macros suffer small read margins for high-precision operations, large cell array area overhead, and limited compatibility with many input and weight configurations. This work presents a 1-to-8-bit configurable SRAM CIM unit-macro using: 1) a hybrid structure combining 6T-SRAM based in-memory binary product-sum (PS) operations with digital near-memory-computing multibit PS accumulation to increase read accuracy and reduce area overhead; 2) column-based place-value-grouped weight mapping and a serial-bit input (SBIN) mapping scheme to facilitate reconfiguration and increase array efficiency under various input and weight configurations; 3) a self-reference multilevel reader (SRMLR) to reduce read-out energy and achieve a sensing margin 2× that of the mid-point reference scheme; and 4) an input-aware bitline voltage compensation scheme to ensure successful read operations across various input-weight patterns. A 4-Kb configurable 6T-SRAM CIM unit-macro was fabricated using a 55-nm CMOS process with foundry 6T-SRAM cells. The resulting macro achieved access times of 3.5 ns per cycle (pipeline) and energy efficiency of 0.6–40.2 TOPS/W under binary to 8-b input/8-b weight precision.
Accelerating real-time embedded scene labeling with convolutional networks Today there is a clear trend towards deploying advanced computer vision (CV) systems in a growing number of application scenarios with strong real-time and power constraints. Brain-inspired algorithms capable of achieving record-breaking results combined with embedded vision systems are the best candidate for the future of CV and video systems due to their flexibility and high accuracy in the area of image understanding. In this paper, we present an optimized convolutional network implementation suitable for real-time scene labeling on embedded platforms. We show that our algorithm can achieve up to 96 GOp/s, running on the Nvidia Tegra K1 embedded SoC. We present experimental results, compare them to the state-of-the-art, and demonstrate that for scene labeling our approach achieves a 1.5× improvement in throughput when compared to a modern desktop CPU at a power budget of only 11 W.
Computing in Memory with Spin-Transfer Torque Magnetic RAM. In-memory computing is a promising approach to addressing the processor-memory data transfer bottleneck in computing systems. We propose spin-transfer torque compute-in-memory (STT-CiM), a design for in-memory computing with spin-transfer torque magnetic RAM (STT-MRAM). The unique properties of spintronic memory allow multiple wordlines within an array to be simultaneously enabled, opening up the ...
MRIMA: An MRAM-Based In-Memory Accelerator In this paper, we propose MRIMA, a novel magnetic RAM (MRAM)-based in-memory accelerator for nonvolatile, flexible, and efficient in-memory computing. MRIMA transforms current spin transfer torque magnetic random access memory (STT-MRAM) arrays to massively parallel computational units capable of working as both nonvolatile memory and in-memory logic. Instead of integrating complex logic units in cost-sensitive memory, MRIMA exploits hardware-friendly bit-line computing methods to implement complete Boolean logic functions between operands within a memory array in a single clock cycle, overcoming the multicycle logic issue in contemporary processing-in-memory (PIM) platforms. We present practical case studies to demonstrate MRIMA’s acceleration for binary-weight and low bit-width convolutional neural networks (CNNs) as well as data encryption. Our device-to-architecture co-simulation results on CNN acceleration demonstrate that MRIMA can obtain 1.7× better energy-efficiency and 11.2× speed-up compared to ASICs, and 1.8× better energy-efficiency and 2.4× speed-up over the best DRAM-based PIM solutions. As an advanced encryption standard (AES) in-memory encryption engine, MRIMA shows ~77% and 21% lower energy consumption compared to CMOS-ASIC and recent domain-wall-based design, respectively.
Time-Domain Computing in Memory Using Spintronics for Energy-Efficient Convolutional Neural Network The data transfer bottleneck in Von Neumann architecture owing to the separation between processor and memory hinders the development of high-performance computing. The computing in memory (CIM) concept is widely considered as a promising solution for overcoming this issue. In this article, we present a time-domain CIM (TD-CIM) scheme using spintronics, which can be applied to construct the energy-efficient convolutional neural network (CNN). Basic Boolean logic operations are implemented through recording the bit-line output at different moments. A multi-addend addition mechanism is then introduced based on the TD-CIM circuit, which can eliminate the cascaded full adders. To further optimize the compatibility of TD-CIM circuit for CNN, we also propose a quantization method that transforms floating-point parameters of pre-trained CNN models into fixed-point parameters. Finally, we build a TD-CIM architecture integrating with a highly reconfigurable array of field-free spin-orbit torque magnetic random access memory (SOT-MRAM) and evaluate its benefits for the quantized CNN. By performing digit recognition with the MNIST dataset, we find that the delay and energy are reduced by 1.2–2.7 times and 2.4×10³–1.1×10⁴ times, respectively, compared with STT-CIM and CRAM based on spintronic memory. Finally, the recognition accuracy can reach 98.65% and 91.11% on MNIST and CIFAR10, respectively.
In-Memory Low-Cost Bit-Serial Addition Using Commodity DRAM Technology In-memory computing architectures present a promising solution to address the memory- and the power-wall challenges by mitigating the bottleneck between processing units and storage. Such architectures incorporate computing functionalities inside memory arrays to make better use of the large internal memory bandwidth, thereby avoiding frequent data movements. In-DRAM computing architectures offer high throughput and energy improvements in accelerating modern data-intensive applications like machine learning, etc. In this manuscript, we propose a vector addition methodology inside DRAM arrays through functional read enabled on local word-lines. The proposed primitive performs majority-based addition operations by storing data in a transposed manner. Majority functions are achieved in DRAM cells by activating an odd number of rows simultaneously. The proposed majority-based bit-serial addition enables huge parallelism and high throughput. We validate the robustness of the proposed in-DRAM computing methodology under process variations to ascertain its reliability. Energy evaluation of the proposed scheme shows 21.7X improvement compared to normal data read operations in the standard DDR3-1333 interface. Moreover, compared to state-of-the-art in-DRAM compute proposals, the proposed scheme provides one of the fastest addition mechanisms with low area overhead (< 1% of DRAM chip area). Our system evaluation running the k-Nearest Neighbor (kNN) algorithm on the MNIST handwritten digit classification dataset shows 11.5X performance improvement compared to a conventional von-Neumann machine.
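The carry chain of bit-serial addition maps directly onto a 3-input majority primitive, which is what simultaneous activation of an odd number of DRAM rows provides; sum bits additionally need XOR. The sketch below models only that arithmetic in software, not the DRAM circuit or the transposed data layout.

```python
def maj3(a, b, c):
    """3-input majority, the primitive provided by activating three DRAM rows."""
    return int(a + b + c >= 2)

def bitserial_add(a_bits, b_bits):
    """Add two little-endian bit vectors one bit position at a time.

    carry_out = MAJ(a_i, b_i, carry_in); sum_i = a_i XOR b_i XOR carry_in.
    """
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = maj3(a, b, carry)
    out.append(carry)
    return out

def to_bits(x, width):
    return [(x >> i) & 1 for i in range(width)]

a, b = 13, 22
bits = bitserial_add(to_bits(a, 6), to_bits(b, 6))
print(sum(v << i for i, v in enumerate(bits)))       # 35
```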
Area-Efficient and Variation-Tolerant In-Memory BNN Computing using 6T SRAM Array We introduce a SRAM-based binary neural network (BNN) hardware which uses a single 6T SRAM cell for XNOR operation for the first time. The cell is 45% smaller than the previous 8T bitcell for XNOR operation. We also propose an in-memory calibration and batch normalization to achieve more reliable operation under the presence of process variation.
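The in-SRAM XNOR operation above is the core primitive of binary-neural-network inference, where a dot product of ±1 vectors reduces to an XNOR followed by a popcount. The following sketch models that arithmetic in software (the bit encodings are assumptions for illustration), not the 6T array itself.

```python
def bnn_dot(x_bits, w_bits):
    """Dot product of two {-1,+1} vectors encoded as {0,1} bits.

    XNOR counts matching bits; the ±1 dot product is 2*matches - length.
    """
    matches = sum(1 - (x ^ w) for x, w in zip(x_bits, w_bits))   # XNOR popcount
    return 2 * matches - len(x_bits)

x = [1, 0, 1, 1, 0, 0, 1, 0]      # encodes +1,-1,+1,+1,-1,-1,+1,-1
w = [1, 1, 0, 1, 0, 1, 1, 0]
print(bnn_dot(x, w))              # equals the sum of elementwise ±1 products
```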
Wireless sensor networks: a survey This paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology, wireless communications and digital electronics. First, the sensing tasks and the potential sensor networks applications are explored, and a review of factors influencing the design of sensor networks is provided. Then, the communication architecture for sensor networks is outlined, and the algorithms and protocols developed for each layer in the literature are explored. Open research issues for the realization of sensor networks are also discussed.
Reaching Agreement in the Presence of Faults The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods.
Software-defined radio receiver: dream to reality This article describes a fully integrated 90 nm CMOS software-defined radio receiver operating in the 800 MHz to 5 GHz band. Unlike the classical SDR paradigm, which digitizes the whole spectrum uniformly, this receiver acts as a signal conditioner for the analog-to-digital converters, emphasizing only the wanted channel. Thus, the ADCs operate with modest resolution and sample rate, consuming low power. This approach makes portable SDR a reality
A Digital Requantizer With Shaped Requantization Noise That Remains Well Behaved After Nonlinear Distortion A major problem in oversampling digital-to-analog converters and fractional-N frequency synthesizers, which are ubiquitous in modern communication systems, is that the noise they introduce contains spurious tones. The spurious tones are the result of digitally generated, quantized signals passing through nonlinear analog components. This paper presents a new method of digital requantization called successive requantization, special cases of which avoids the spurious tone generation problem. Sufficient conditions are derived that ensure certain statistical properties of the quantization noise, including the absence of spurious tones after nonlinear distortion. A practical example is presented and shown to satisfy these conditions.
Permanent-magnets linear actuators applicability in automobile active suspensions Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments in power electronics, permanent magnet materials, and microelectronic systems justify analyzing the possibility of implementing electromagnetic actuators to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.
Electromagnetic regenerative suspension system for ground vehicles This paper considers an electromagnetic regenerative suspension system (ERSS) that recovers the kinetic energy of vehicle vibration that is otherwise dissipated in traditional shock absorbers. It can also act as a controllable damper that improves the vehicle's ride and handling performance. The proposed electromagnetic regenerative shock absorbers (ERSAs) utilize a linear or a rotational electromagnetic generator to convert the kinetic energy from suspension vibration into electricity, which can be used to reduce the load on the alternator and thereby improve fuel efficiency. A complete ERSS is discussed here that includes the regenerative shock absorber, the power electronics for power regulation and suspension control, and an electronic control unit (ECU). Different shock absorber designs are proposed and compared for simplicity, efficiency, energy density, and controlled suspension performance. Both simulation and experiment results are presented and discussed.
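A rough, hypothetical estimate of the power recoverable by such a regenerative shock absorber, modeling it as a viscous damper with a generator conversion efficiency; all numbers are illustrative assumptions, not figures from the paper.

def recoverable_power(damping_N_s_per_m, rel_velocity_m_per_s, efficiency):
    # Power dissipated by an equivalent viscous damper, scaled by the
    # electromechanical conversion efficiency: P = eta * c * v^2.
    return efficiency * damping_N_s_per_m * rel_velocity_m_per_s ** 2

# Illustrative values: c = 1500 N*s/m, 0.1 m/s RMS relative velocity, 50% efficiency.
print(recoverable_power(1500.0, 0.1, 0.5))  # 7.5 W per damper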
A Sub-$\mu$W Reconfigurable Front-End for Invasive Neural Recording That Exploits the Spectral Characteristics of the Wideband Neural Signal This paper presents a sub-$\mu$W ac-coupled reconfigurable front-end for invasive wideband neural signal recording. The proposed topology embeds filtering capabilities enabling the selection of different frequency bands inside the neural signal spectrum. Power consumption is optimized by defining specific noise targets for each sub-band. These targets take into account the spectral characteristics of wideband neural signals: local field potentials (LFP) exhibit $1/f^{x}$ magnitude scaling while action potentials (AP) show uniform magnitude across frequency. Additionally, noise targets also consider electrode noise and the spectral distribution of noise sources in the circuit. An experimentally verified prototype designed in a standard 180 nm CMOS process draws 815 nW from a 1 V supply. The front-end is able to select among four different frequency bands (modes) up to 5 kHz. The measured input-referred spot noise at 500 Hz in the LFP mode (1 Hz - 700 Hz) is $55~nV/\sqrt{Hz}$ while the integrated noise in the AP mode (200 Hz - 5 kHz) is $4.1~\mu Vrms$. The proposed front-end achieves sub-$\mu$W operation without penalizing other specifications such as input swing, common-mode or power-supply rejection ratios. It reduces the power consumption of neural front-ends with spectral selectivity by $6.1\times$ and, compared with conventional wideband front-ends, it obtains a reduction of $2.5\times$.
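As a rough cross-check of the figures quoted above, a short Python calculation assuming an approximately flat input-referred noise density across the AP band (a simplification: the real spectrum is not flat and noise bandwidth differs from signal bandwidth).

import math

def integrated_noise(density_v_per_rthz, bandwidth_hz):
    # For a flat density, integrated RMS noise = density * sqrt(bandwidth).
    return density_v_per_rthz * math.sqrt(bandwidth_hz)

def equivalent_flat_density(vrms, bandwidth_hz):
    return vrms / math.sqrt(bandwidth_hz)

# 4.1 uVrms over the 200 Hz - 5 kHz AP band corresponds to roughly 59 nV/sqrt(Hz).
print(equivalent_flat_density(4.1e-6, 5000 - 200))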
Scores: 1.1, 0.1, 0.1, 0.1, 0.1, 0.033333, 0.02, 0, 0, 0, 0, 0, 0, 0